
Midjourney to Premiere Pro

TL;DR: Right-click any image in Midjourney, copy the URL, paste it into modelBridge’s media card. The image loads instantly. Pick a model — image-to-video, style transfer, upscale, inpainting — and generate. The result lands on your timeline.


Midjourney is one of the best tools for generating high-quality concept art, reference frames, and hero images. But getting those images into Premiere Pro usually means: download, find the file, import to project, drag to timeline, then figure out which AI video model to feed it to.

With modelBridge, the path is: right-click in Midjourney, paste in modelBridge, generate. Three actions, no file management.


Step 1 — Generate your image in Midjourney


Use Midjourney as you normally would — Discord, the web app, whichever you prefer. Get the image you want to work with.

Midjourney’s strength is creative direction. Use it for:

  • Hero frames and establishing shots
  • Character concepts and portraits
  • Environments and backgrounds
  • Mood references and color studies

Once you have an image you like, move to the next step.


Step 2 — Copy the image URL

In Midjourney’s web interface or Discord:

  1. Right-click the generated image
  2. Select Copy image address (not “Copy image” — you want the URL, not the pixels)

You now have a direct CDN link to your Midjourney image on your clipboard.


Step 3 — Paste the URL into modelBridge

In Premiere Pro, open modelBridge. Select any model that accepts image input — image-to-video, image-to-image, upscaling, inpainting, style transfer.

Below the media card (“Select Image”), you’ll see a URL field with the placeholder “Paste URL link from Midjourney etc”.

Paste your copied URL (Ctrl+V / Cmd+V). The image loads instantly as a thumbnail preview — no download, no file save, no Premiere import step. modelBridge sends the URL directly to the AI model.
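For the curious, URL-based media input generally comes down to sending the link itself in the request body rather than uploading pixels. A minimal Python sketch of that idea — the field names, model id, and payload shape here are hypothetical illustrations, not modelBridge’s actual API:

```python
import json

def build_generation_payload(image_url: str, prompt: str,
                             model: str = "kling-image-to-video") -> str:
    """Sketch: assemble a JSON body for a hypothetical URL-based
    generation request. The model service fetches the image from the
    URL itself, so no local download or re-upload is involved."""
    if not image_url.startswith(("http://", "https://")):
        raise ValueError("expected a direct http(s) image URL")
    return json.dumps({
        "model": model,          # hypothetical model identifier
        "image_url": image_url,  # the Midjourney CDN link from your clipboard
        "prompt": prompt,        # motion/edit instruction for the model
    })

body = build_generation_payload(
    "https://cdn.midjourney.com/abc123/0_0.png",
    "slow camera push, warm afternoon light",
)
```

Because the service fetches the image from the URL itself, the link has to be reachable from the outside — which is why public CDN links work and private ones don’t.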


Step 4 — Pick a model and generate

Now you have a Midjourney image loaded as input. What you do with it depends on your project:

Animate it. Pick an image-to-video model (Kling, Wan, Veo, PixVerse) and turn your still into motion. Write a prompt describing the movement you want.

Upscale it. Pick an upscaling model to increase resolution for large-format output or 4K timelines.

Transfer the style. Use an image-to-image model to apply the look of your Midjourney reference to a different composition.

Edit parts of it. Use an inpainting model to modify specific areas — paint a mask over what you want to change, describe the replacement.

Compare models. Use Dual Mode to send the same Midjourney image to two different video models simultaneously. Keep the version that fits your edit.

Click Generate. The result appears in the preview panel, ready to import to your timeline.


Step 5 — Import to your timeline

Click Import to Timeline or Save to Project Bin. The generated result — a video clip animated from your Midjourney image, an upscaled version, or a style-transferred variant — lands in your Premiere project, positioned and ready to cut.


Concept to motion in 60 seconds. A director sends you a Midjourney mood frame. Right-click, copy URL, paste into modelBridge, pick Kling v3 Pro, write “slow camera push, warm afternoon light.” You have a 10-second animated establishing shot before the next meeting.

Hero image to multiple angles. Generate a character portrait in Midjourney. Paste it into three different image-to-video models in modelBridge. Each interprets the motion differently — pick the one that fits the scene energy.

Moodboard to locked edit. Build a sequence of Midjourney stills as placeholders. One by one, paste each URL into modelBridge and animate them. Replace the stills on your timeline with the generated video. Your moodboard becomes a rough cut without leaving Premiere.

Client-approved reference, now in motion. The client signed off on a Midjourney frame. Paste the approved image directly into an image-to-video model — the generated clip matches exactly what was approved, because it started from that image.


The URL paste field isn’t limited to Midjourney. Any public image URL works:

  • DALL-E / ChatGPT — right-click generated images, copy address
  • Stable Diffusion web UIs — copy the image URL from the output gallery
  • Stock photo sites — paste a preview URL for quick tests before licensing
  • Any CDN or direct image link — as long as it ends in a common image format or is publicly accessible

The workflow is always the same: copy URL, paste, generate.
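As a quick sanity check before pasting, a direct image link usually ends in a common image extension (URLs that fail this heuristic may still load if the server returns an `image/*` Content-Type). A small Python sketch of that check — the extension list is illustrative, not modelBridge’s actual validation:

```python
from urllib.parse import urlparse

IMAGE_EXTENSIONS = {".png", ".jpg", ".jpeg", ".webp", ".gif"}

def looks_like_direct_image_url(url: str) -> bool:
    """Heuristic sketch: True when the URL is http(s) and its path
    ends in a common image extension."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        return False
    path = parsed.path.lower()
    return any(path.endswith(ext) for ext in IMAGE_EXTENSIONS)

looks_like_direct_image_url("https://cdn.midjourney.com/abc/0_0.png")  # True
looks_like_direct_image_url("https://discord.com/channels/123/456")    # False
```

A Discord channel link fails this check because it points at a page, not the image file — which is why Discord images may need to be opened in the browser first to get the underlying CDN URL.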


  • Use the highest resolution Midjourney output. Upscaled (U1–U4) images give video models more detail to work with. Low-res grid thumbnails produce softer results.
  • Match aspect ratios. If your Midjourney image is 1:1 but your video model defaults to 16:9, the model will crop or letterbox. Generate your Midjourney image in the aspect ratio you need for the final video.
  • Prompt for stillness when you want control. When animating a Midjourney portrait, prompt the video model for subtle movement — “slight head turn, gentle breathing” — rather than dramatic action. You’ll get a more usable clip.
  • Chain workflows. Midjourney image → image-to-video → extend shot → timeline. Each step uses the output of the previous one.
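The aspect-ratio tip can be checked numerically before you generate. A small sketch that flags a mismatch — whether a given model crops or letterboxes when ratios differ is model-dependent; this only detects the discrepancy:

```python
def aspect_mismatch(image_wh: tuple[int, int],
                    target_wh: tuple[int, int],
                    tol: float = 0.01) -> bool:
    """Sketch: True when the image's aspect ratio differs from the
    target beyond a small tolerance, meaning the video model will
    have to crop or letterbox to fit."""
    iw, ih = image_wh
    tw, th = target_wh
    return abs(iw / ih - tw / th) > tol

aspect_mismatch((1024, 1024), (1920, 1080))  # True: 1:1 into 16:9
aspect_mismatch((1792, 1008), (1920, 1080))  # False: both 16:9
```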

Limitations

  • URL must be publicly accessible: private or authenticated URLs won’t load. Midjourney CDN URLs are public by default.
  • Images only: video URLs from other platforms need to be downloaded and imported traditionally.
  • No batch paste: one URL at a time per media card.
  • Midjourney Discord: images shared in Discord may require opening in the browser first to get a direct CDN URL.

  • Dual Mode — compare two models — send the same Midjourney image to two models at once.
  • From moodboard to locked shot — a full workflow from reference images to final edit.