Prediction
jinchanz/sticker:570f683064a45e30c61fdbb98cd697fc561df9b1c21554c71267d99e020f08b5
ID: rw0zcw488xrgp0cephh8mtthjg
Status: Succeeded
Source: API
Hardware: A40
Total duration: 143.1 seconds
Created: 2024-04-06T17:09:00Z

Input
- steps: 20
- width: 1024
- height: 1024
- prompt: a cute cat
- upscale: true
- upscale_steps: 10
- negative_prompt: ""

{
  "steps": 20,
  "width": 1024,
  "height": 1024,
  "prompt": "a cute cat",
  "upscale": true,
  "upscale_steps": 10,
  "negative_prompt": ""
}
Install Replicate’s Node.js client library:

npm install replicate
Import and set up the client:

import Replicate from "replicate";
import { writeFile } from "node:fs/promises";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run jinchanz/sticker using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "jinchanz/sticker:570f683064a45e30c61fdbb98cd697fc561df9b1c21554c71267d99e020f08b5",
  {
    input: {
      steps: 20,
      width: 1024,
      height: 1024,
      prompt: "a cute cat",
      upscale: true,
      upscale_steps: 10,
      negative_prompt: ""
    }
  }
);

// To access the file URL:
console.log(output[0].url()); //=> "http://example.com"

// To write the file to disk (output[0] is a file stream, so use the
// promise-based writeFile):
await writeFile("my-image.png", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:

pip install replicate
Import the client:

import replicate
Run jinchanz/sticker using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "jinchanz/sticker:570f683064a45e30c61fdbb98cd697fc561df9b1c21554c71267d99e020f08b5",
    input={
        "steps": 20,
        "width": 1024,
        "height": 1024,
        "prompt": "a cute cat",
        "upscale": True,
        "upscale_steps": 10,
        "negative_prompt": ""
    }
)

# To access the file URL:
print(output[0].url())
#=> "http://example.com"

# To write the file to disk:
with open("my-image.png", "wb") as file:
    file.write(output[0].read())
To learn more, take a look at the guide on getting started with Python.
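Note that this prediction's output contains two images (the upscaled sticker and a background-removed version), so you will often want to save every file in the output list, not just the first. The sketch below is one way to do that with the Python client; it also uses replicate.predictions.create and Prediction.wait() if you want access to the prediction's status and metrics rather than just its output. The file-naming scheme is an arbitrary choice for illustration.

import replicate

MODEL_VERSION = (
    "jinchanz/sticker:"
    "570f683064a45e30c61fdbb98cd697fc561df9b1c21554c71267d99e020f08b5"
)
INPUTS = {
    "steps": 20,
    "width": 1024,
    "height": 1024,
    "prompt": "a cute cat",
    "upscale": True,
    "upscale_steps": 10,
    "negative_prompt": "",
}

# Save every image in the output list (this model returns two files).
output = replicate.run(MODEL_VERSION, input=INPUTS)
for i, image in enumerate(output):
    with open(f"sticker-{i}.png", "wb") as f:  # arbitrary filename pattern
        f.write(image.read())

# Alternatively, create a prediction object to inspect its status and metrics.
prediction = replicate.predictions.create(
    version=MODEL_VERSION.split(":")[-1],  # create() takes the bare version ID
    input=INPUTS,
)
prediction.wait()  # block until the prediction reaches a terminal state
print(prediction.status)                               # e.g. "succeeded"
print((prediction.metrics or {}).get("predict_time"))  # model run time in seconds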
Run jinchanz/sticker using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "jinchanz/sticker:570f683064a45e30c61fdbb98cd697fc561df9b1c21554c71267d99e020f08b5",
    "input": {
      "steps": 20,
      "width": 1024,
      "height": 1024,
      "prompt": "a cute cat",
      "upscale": true,
      "upscale_steps": 10,
      "negative_prompt": ""
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
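Whereas the curl example above uses the Prefer: wait header to hold the connection open until the prediction finishes, you can also create the prediction and poll it yourself. The sketch below reproduces that flow in Python with the third-party requests package (assumed installed): it POSTs to the predictions endpoint, then polls the urls.get URL from the response until the prediction reaches a terminal state, matching the fields visible in the Output JSON below.

import os
import time
import requests

headers = {
    "Authorization": f"Bearer {os.environ['REPLICATE_API_TOKEN']}",
    "Content-Type": "application/json",
}
body = {
    "version": "jinchanz/sticker:570f683064a45e30c61fdbb98cd697fc561df9b1c21554c71267d99e020f08b5",
    "input": {
        "steps": 20,
        "width": 1024,
        "height": 1024,
        "prompt": "a cute cat",
        "upscale": True,
        "upscale_steps": 10,
        "negative_prompt": "",
    },
}

# Create the prediction.
prediction = requests.post(
    "https://api.replicate.com/v1/predictions", headers=headers, json=body
).json()

# Poll the prediction's own URL until it finishes.
while prediction["status"] not in ("succeeded", "failed", "canceled"):
    time.sleep(2)
    prediction = requests.get(prediction["urls"]["get"], headers=headers).json()

print(prediction["status"])  # e.g. "succeeded"
print(prediction["output"])  # list of output image URLs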
Output
{ "completed_at": "2024-04-06T17:11:23.194510Z", "created_at": "2024-04-06T17:09:00.103000Z", "data_removed": false, "error": null, "id": "rw0zcw488xrgp0cephh8mtthjg", "input": { "steps": 20, "width": 1024, "height": 1024, "prompt": "a cute cat", "upscale": true, "upscale_steps": 10, "negative_prompt": "" }, "logs": "Random seed set to: 817509575\nChecking inputs\n====================================\nChecking weights\n✅ albedobaseXL_v13.safetensors\n✅ dreamshaper_8.safetensors\n✅ artificialguybr/StickersRedmond.safetensors\n✅ 4x-AnimeSharp.pth\n✅ RMBG-1.4/model.pth\n====================================\nRunning workflow\ngot prompt\nExecuting node 3, title: LoRA Stacker, class type: LoRA Stacker\nExecuting node 2, title: Efficient Loader, class type: Efficient Loader\nmodel_type EPS\nUsing pytorch attention in VAE\nUsing pytorch attention in VAE\nclip missing: ['clip_l.logit_scale', 'clip_l.transformer.text_projection.weight']\nRequested to load SDXLClipModel\nLoading 1 new model\n----------------------------------------\n\u001b[36mEfficient Loader Models Cache:\u001b[0m\nCkpt:\n[1] albedobaseXL_v13\nLora:\n[1] base_ckpt: albedobaseXL_v13\nlora(mod,clip): StickersRedmond(1,1)\nExecuting node 4, title: KSampler (Efficient), class type: KSampler (Efficient)\nRequested to load SDXL\nLoading 1 new model\n 0%| | 0/20 [00:00<?, ?it/s]/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/torchsde/_brownian/brownian_interval.py:608: UserWarning: Should have tb<=t1 but got tb=14.614643096923828 and t1=14.614643.\nwarnings.warn(f\"Should have {tb_name}<=t1 but got {tb_name}={tb} and t1={self._end}.\")\n 5%|▌ | 1/20 [00:00<00:11, 1.65it/s]\n 10%|█ | 2/20 [00:00<00:07, 2.57it/s]\n 15%|█▌ | 3/20 [00:01<00:05, 3.15it/s]\n 20%|██ | 4/20 [00:01<00:04, 3.50it/s]\n 25%|██▌ | 5/20 [00:01<00:03, 3.78it/s]\n 30%|███ | 6/20 [00:01<00:03, 3.93it/s]\n 35%|███▌ | 7/20 [00:02<00:03, 4.07it/s]\n 40%|████ | 8/20 [00:02<00:02, 4.13it/s]\n 45%|████▌ | 9/20 [00:02<00:02, 4.16it/s]\n 50%|█████ | 10/20 [00:02<00:02, 4.19it/s]\n 55%|█████▌ | 11/20 [00:02<00:02, 4.14it/s]\n 60%|██████ | 12/20 [00:03<00:01, 4.22it/s]\n 65%|██████▌ | 13/20 [00:03<00:01, 4.28it/s]\n 70%|███████ | 14/20 [00:03<00:01, 4.33it/s]\n 75%|███████▌ | 15/20 [00:03<00:01, 4.34it/s]\n 80%|████████ | 16/20 [00:04<00:00, 4.36it/s]\n 85%|████████▌ | 17/20 [00:04<00:00, 4.37it/s]\n 90%|█████████ | 18/20 [00:04<00:00, 4.40it/s]\n 95%|█████████▌| 19/20 [00:04<00:00, 4.48it/s]\n100%|██████████| 20/20 [00:04<00:00, 4.68it/s]\n100%|██████████| 20/20 [00:04<00:00, 4.04it/s]\nRequested to load AutoencoderKL\nLoading 1 new model\nExecuting node 12, title: Load Checkpoint, class type: CheckpointLoaderSimple\nmodel_type EPS\nUsing pytorch attention in VAE\nUsing pytorch attention in VAE\nclip missing: ['clip_l.logit_scale', 'clip_l.transformer.text_projection.weight']\nExecuting node 13, title: CLIP Text Encode (Prompt), class type: CLIPTextEncode\nRequested to load SD1ClipModel\nLoading 1 new model\nExecuting node 14, title: CLIP Text Encode (Prompt), class type: CLIPTextEncode\nExecuting node 15, title: Load Upscale Model, class type: UpscaleModelLoader\nExecuting node 11, title: Ultimate SD Upscale, class type: UltimateSDUpscale\nCanva size: 2048x2048\nImage size: 1024x1024\nScale factor: 2\nUpscaling iteration 1 with scale factor 2\nTile size: 512x512\nTiles amount: 16\nGrid: 4x4\nRedraw enabled: True\nSeams fix mode: NONE\nRequested to load AutoencoderKL\nLoading 1 new model\nRequested to load BaseModel\nLoading 1 new model\n 0%| | 0/10 [00:00<?, ?it/s]\n 
10%|█ | 1/10 [00:00<00:01, 5.80it/s]\n 40%|████ | 4/10 [00:00<00:00, 14.84it/s]\n 70%|███████ | 7/10 [00:00<00:00, 17.74it/s]\n100%|██████████| 10/10 [00:00<00:00, 19.40it/s]\n100%|██████████| 10/10 [00:00<00:00, 17.31it/s]\n 0%| | 0/10 [00:00<?, ?it/s]\n 30%|███ | 3/10 [00:00<00:00, 23.35it/s]\n 60%|██████ | 6/10 [00:00<00:00, 22.28it/s]\n 90%|█████████ | 9/10 [00:00<00:00, 22.10it/s]\n100%|██████████| 10/10 [00:00<00:00, 22.07it/s]\n 0%| | 0/10 [00:00<?, ?it/s]\n 30%|███ | 3/10 [00:00<00:00, 23.35it/s]\n 60%|██████ | 6/10 [00:00<00:00, 22.35it/s]\n 90%|█████████ | 9/10 [00:00<00:00, 22.14it/s]\n100%|██████████| 10/10 [00:00<00:00, 22.20it/s]\n 0%| | 0/10 [00:00<?, ?it/s]\n 30%|███ | 3/10 [00:00<00:00, 22.42it/s]\n 60%|██████ | 6/10 [00:00<00:00, 22.04it/s]\n 90%|█████████ | 9/10 [00:00<00:00, 21.84it/s]\n100%|██████████| 10/10 [00:00<00:00, 21.89it/s]\n 0%| | 0/10 [00:00<?, ?it/s]\n 30%|███ | 3/10 [00:00<00:00, 23.45it/s]\n 60%|██████ | 6/10 [00:00<00:00, 21.85it/s]\n 90%|█████████ | 9/10 [00:00<00:00, 22.06it/s]\n100%|██████████| 10/10 [00:00<00:00, 22.09it/s]\n 0%| | 0/10 [00:00<?, ?it/s]\n 30%|███ | 3/10 [00:00<00:00, 23.43it/s]\n 60%|██████ | 6/10 [00:00<00:00, 22.26it/s]\n 90%|█████████ | 9/10 [00:00<00:00, 21.99it/s]\n100%|██████████| 10/10 [00:00<00:00, 22.12it/s]\n 0%| | 0/10 [00:00<?, ?it/s]\n 30%|███ | 3/10 [00:00<00:00, 23.38it/s]\n 60%|██████ | 6/10 [00:00<00:00, 22.35it/s]\n 90%|█████████ | 9/10 [00:00<00:00, 22.01it/s]\n100%|██████████| 10/10 [00:00<00:00, 22.11it/s]\n 0%| | 0/10 [00:00<?, ?it/s]\n 30%|███ | 3/10 [00:00<00:00, 23.25it/s]\n 60%|██████ | 6/10 [00:00<00:00, 22.21it/s]\n 90%|█████████ | 9/10 [00:00<00:00, 22.07it/s]\n100%|██████████| 10/10 [00:00<00:00, 22.15it/s]\n 0%| | 0/10 [00:00<?, ?it/s]\n 30%|███ | 3/10 [00:00<00:00, 23.27it/s]\n 60%|██████ | 6/10 [00:00<00:00, 22.38it/s]\n 90%|█████████ | 9/10 [00:00<00:00, 21.95it/s]\n100%|██████████| 10/10 [00:00<00:00, 22.15it/s]\n 0%| | 0/10 [00:00<?, ?it/s]\n 30%|███ | 3/10 [00:00<00:00, 23.28it/s]\n 60%|██████ | 6/10 [00:00<00:00, 21.96it/s]\n 90%|█████████ | 9/10 [00:00<00:00, 21.83it/s]\n100%|██████████| 10/10 [00:00<00:00, 21.93it/s]\n 0%| | 0/10 [00:00<?, ?it/s]\n 30%|███ | 3/10 [00:00<00:00, 23.25it/s]\n 60%|██████ | 6/10 [00:00<00:00, 22.33it/s]\n 90%|█████████ | 9/10 [00:00<00:00, 21.99it/s]\n100%|██████████| 10/10 [00:00<00:00, 22.11it/s]\n 0%| | 0/10 [00:00<?, ?it/s]\n 30%|███ | 3/10 [00:00<00:00, 23.20it/s]\n 60%|██████ | 6/10 [00:00<00:00, 21.86it/s]\n 90%|█████████ | 9/10 [00:00<00:00, 21.73it/s]\n100%|██████████| 10/10 [00:00<00:00, 21.99it/s]\n 0%| | 0/10 [00:00<?, ?it/s]\n 30%|███ | 3/10 [00:00<00:00, 22.94it/s]\n 60%|██████ | 6/10 [00:00<00:00, 22.01it/s]\n 90%|█████████ | 9/10 [00:00<00:00, 21.94it/s]\n100%|██████████| 10/10 [00:00<00:00, 21.92it/s]\n 0%| | 0/10 [00:00<?, ?it/s]\n 30%|███ | 3/10 [00:00<00:00, 23.13it/s]\n 60%|██████ | 6/10 [00:00<00:00, 21.97it/s]\n 90%|█████████ | 9/10 [00:00<00:00, 20.04it/s]\n100%|██████████| 10/10 [00:00<00:00, 20.84it/s]\n 0%| | 0/10 [00:00<?, ?it/s]\n 30%|███ | 3/10 [00:00<00:00, 22.98it/s]\n 60%|██████ | 6/10 [00:00<00:00, 22.00it/s]\n 90%|█████████ | 9/10 [00:00<00:00, 21.79it/s]\n100%|██████████| 10/10 [00:00<00:00, 22.05it/s]\n 0%| | 0/10 [00:00<?, ?it/s]\n 30%|███ | 3/10 [00:00<00:00, 22.21it/s]\n 60%|██████ | 6/10 [00:00<00:00, 22.23it/s]\n 90%|█████████ | 9/10 [00:00<00:00, 21.92it/s]\n100%|██████████| 10/10 [00:00<00:00, 21.95it/s]\nExecuting node 16, title: Save Image, class type: SaveImage\nExecuting node 8, title: 🧹BRIA_RMBG Model Loader, class 
type: BRIA_RMBG_ModelLoader_Zho\nExecuting node 17, title: 🧹BRIA RMBG, class type: BRIA_RMBG_Zho\nExecuting node 18, title: Save Image, class type: SaveImage\nPrompt executed in 47.99 seconds\noutputs: {'4': {'images': [{'filename': 'ComfyUI_temp_fpbon_00001_.png', 'subfolder': '', 'type': 'temp'}]}, '16': {'images': [{'filename': 'ComfyUI_00001_.png', 'subfolder': '', 'type': 'output'}]}, '18': {'images': [{'filename': 'ComfyUI_00002_.png', 'subfolder': '', 'type': 'output'}]}}\n====================================\nContents of /tmp/outputs:\nComfyUI_00001_.png\nComfyUI_00002_.png", "metrics": { "predict_time": 49.396705, "total_time": 143.09151 }, "output": [ "https://replicate.delivery/pbxt/nfSpRQHEjg26dif2Op4xCCrNolv7O07tPe8XMxyzmBa0hmPlA/ComfyUI_00001_.png", "https://replicate.delivery/pbxt/brRa34T2pgbWGlQ4pTmwY5P9Nxcgf2JfOPUfeeQIC6HWHaepE/ComfyUI_00002_.png" ], "started_at": "2024-04-06T17:10:33.797805Z", "status": "succeeded", "urls": { "get": "https://api.replicate.com/v1/predictions/rw0zcw488xrgp0cephh8mtthjg", "cancel": "https://api.replicate.com/v1/predictions/rw0zcw488xrgp0cephh8mtthjg/cancel" }, "version": "570f683064a45e30c61fdbb98cd697fc561df9b1c21554c71267d99e020f08b5" }
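In a response like the one above, the fields you usually care about are status, output (a list of file URLs; for this model, the upscaled sticker and its background-removed variant) and metrics.predict_time. Here is a small sketch of pulling those out and downloading each image, again assuming requests is available; the local filenames are arbitrary.

import requests

def save_prediction_outputs(prediction: dict) -> None:
    """Print timing info and download every output image from a prediction dict."""
    print("status:", prediction["status"])
    print("predict time (s):", prediction["metrics"]["predict_time"])

    for i, url in enumerate(prediction["output"]):
        image_bytes = requests.get(url, timeout=60).content
        with open(f"output-{i}.png", "wb") as f:  # arbitrary local filenames
            f.write(image_bytes)
        print(f"saved output-{i}.png from {url}")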
Generated in 49.4 seconds
Want to make some of these yourself? Run this model.