More Tools in the Toolbox: How (spark)’s creatives are really using AI in video and content production

There’s no escaping it now: Artificial Intelligence (AI)-powered software and tools have rapidly become an integral part of the creative process for those working in photo, video, and creative content production. Marketing agency Monks is just one of many firms publicly sharing its use of AI tools like Gemini and ImageFX to produce advertising and video campaigns (in this case, for sleep wellness company Hatch). From standing in as another creative team member during the idea generation phase to efficiently editing and polishing the finished product in post-production, AI tools are redefining how creative production studios approach every stage of the process.

In fact, in a recent poll, 44% of respondents said they had adopted AI for video creation, indicating its growing significance in marketing and creative strategies. And it’s no wonder: AI tools have been reported to save up to 62% of the time required to produce training videos, equating to a reduction of approximately eight days in the production timeline. Even Jeffrey Katzenberg, co-founder of DreamWorks Animation—the studio behind some of the highest-grossing animated films of all time, including Shrek 2 and Antz—has described AI as an “amazing resource” for Hollywood, emphasizing its role in enhancing productivity and the diversity of creative work.

Here at (spark), creatives cite similar reasons for their embrace of AI in video and creative content production, including streamlined workflows and increased efficiency that reduce production time and costs—savings that can then be passed along to the client. Naturally, (spark) team members love how they can use AI to automate repetitive tasks, allowing them to focus on what they do best: making amazing creative. And for people like Executive Creative Director Kenny Friedman, AI provides the means to develop an entirely new visual style, one that’s attracted a devoted Instagram following at @creaturesoffpo. Here’s what Kenny and fellow creatives Madison Asher (Associate Creative Director), Taylor Cochran (Creative Content Producer), and David Carrero (Director of Production and Animation) had to say about how they’re using AI in video and content production.

Tell me about some of the primary ways you’ve been using AI in production.

Taylor: I’ve been using AI to help clean up scripts so they fit the run time. AI also helps me with copy for presentations and proposals. I would recommend clients try using it for social media copy, because it’s easy to fall into a rut with the brand voice when you’re responsible for such a high volume of posts and copy.

Kenny: We’ve used AI to generate directional imagery for both stills and video. Instead of relying on stock, it’s much easier and quicker to get consistent characters or scenes across multiple images. Previously, you’d need someone to manually draw or Photoshop these elements. Now, AI can create nearly identical characters or objects from frame to frame, which is a huge time-saver. That’s where production begins.

For example, with a national liqueur client, we started by concepting ideas with AI to visualize them. Once we saw the visuals, we realized they could be something big. Two years ago, achieving that level of quality would’ve taken extensive Photoshop work and still wouldn’t have looked as good.

Is there a case where you’re using AI both for concepting and as the final product?

Kenny: Yes. We’ve used AI for creating room sets—for example, for a wall furnishings company—where we design a virtual room, then swap out products in the background. This lets us produce a variety of spaces more cost-effectively than a traditional shoot.

And then for a national outdoor grill client, we created lifestyle grilling scenes and inserted the grills into them, showcasing how they could fit into any environment. It’s the same approach we used for a similar prospective client—designing spaces and dropping their products into them.

Which tools do you use for those types of photo projects?

Kenny: For the grills and room sets, we used a combination of MidJourney and Stable Diffusion. For product modeling, we can start with a photo or CAD file, train the AI on that image, and then use it as a prompt to create visuals. This allows us to integrate a specific product—like a uniquely shaped water bottle—into various scenarios.

What about storyboards and pitch work?

Madison: For our amusement park pitch, and even for our recent med-tech pitch, we’ve been using AI to concept real-life event activations and trade show ideas. It’s great for quickly visualizing concepts that would take forever in Photoshop.

For a recent campaign for our client American Kennel Club (AKC), we used MidJourney to create storyboards and sketches for the brand campaign. AI allowed us to maintain consistent characters across scenes, which is especially useful when working with specific elements like dog breeds.

Kenny: Adding to that, we’ve also explored using AI for motion tests. Starting with AI-generated images, we can add motion to them using tools like Luma.ai or Runway. This helps us pre-plan animations and transitions more effectively.

What’s your full tech stack for motion?

Kenny: We start with MidJourney or any image, then bring it into Luma.ai or Runway. For lip sync or advanced motion, we use Pika.art. For finalizing video, we sometimes enhance resolution with Topaz Video AI.

Madison: We also used Illustrator’s generative AI for a Highline project. It allowed us to create icons and illustrative people in the same style, which we then animated in After Effects.

How are you using AI in post-production?

David: Premiere Pro has really stepped up its game with some fantastic AI-driven tools. The new built-in audio enhancement tool is incredible—it makes recordings sound like they were captured in a studio, which is a huge time-saver.

We’ve also experimented with an AI upscaler to transform low-res 720p content into crisp 1080p and even 4K resolutions. On top of that, After Effects’ Content-Aware Fill effect has been a game-changer for tasks like removing logos. We used to have to painstakingly edit frame by frame in Photoshop, but the AI fills in the gaps seamlessly.

Anything else you’re excited about in AI for production?

David: Looking ahead, Premiere’s new ‘Video-extend’ feature has me really excited. The idea of AI creating additional frames to extend a clip opens up so many creative possibilities—I can’t wait to dive into it.

Kenny: AI is amazing for streamlining workflows and producing stunning visuals. But there’s a balance—it’s important to use it where it makes sense and not just as a cost-cutting tool. Some things, like the tactile feel of real-world elements, can’t be replaced. For example, those old Sony commercials with paint explosions or bouncing balls—AI could replicate them visually, but it would miss the tangible magic of the real thing.
