Amanpreet Singh

3.7K posts

@amanxdesign

I help E-Commerce brands scale their creative output using AI, without sacrificing quality or burning their budget. Images • Video ads • Automation

Check my work →
Joined April 2024
404 Following · 431 Followers
Pinned Tweet
Amanpreet Singh@amanxdesign·
UGC-style ad for a fashion brand. 100% AI.
- 2 hours, start to finish
- Under $5 in credits
- Single creative angle, fully testable
Stack:
→ Nano Banana Pro for character and images via Higgsfield
→ Seedance 2.0 + Kling for video clips via Higgsfield
→ CapCut for the final cut
The real value isn't this one video. It's that you can spin up 10 different hooks for the same product in a day. Different angles, different scenarios, different testimonials. Test what works. Scale what does. Kill what doesn't. This is what creative testing looks like when production isn't the bottleneck anymore. If you want something like this for your brand, DM me.
Replies: 1 · Reposts: 0 · Likes: 5 · Views: 388
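The "10 hooks for the same product in a day" idea above is, at bottom, a batch job over prompt variants. A minimal sketch in Python of that fan-out, where the hook angles, scenarios, product name, and prompt template are all illustrative assumptions, not the author's actual pipeline (which runs through the Higgsfield UI):

```python
from itertools import product

# Hypothetical hook angles and scenarios -- illustrative, not the author's real list.
ANGLES = ["problem/solution", "price comparison", "day-in-the-life", "testimonial", "unboxing"]
SCENARIOS = ["cafe", "bedroom vanity", "car"]

def build_hook_prompts(product_name: str, template: str) -> list[str]:
    """Expand one product brief into every angle x scenario prompt variant."""
    return [
        template.format(product=product_name, angle=angle, scenario=scenario)
        for angle, scenario in product(ANGLES, SCENARIOS)
    ]

# Invented template; a real one would carry the full scene/style directives.
TEMPLATE = (
    "UGC-style vertical 9:16 ad for {product}. "
    "Hook angle: {angle}. Setting: {scenario}. "
    "iPhone front-camera selfie aesthetic, natural handheld shake."
)

prompts = build_hook_prompts("Aurelle perfume", TEMPLATE)
print(len(prompts))  # 5 angles x 3 scenarios = 15 testable variants
```

Each resulting prompt would then be fed to the image/video model one at a time; the point is that variant generation is mechanical, so testing breadth costs almost nothing.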
Amanpreet Singh@amanxdesign·
This is true, and I can vouch for it. I do this commercially, and people have misconceptions, thanks to random course sellers, that you can make these videos in 10 minutes for a dollar. Notice that most posts claiming 10-minute videos are usually selling an AI course, not providing the work as a service to brands. I try to quote 3-5 working days to be on the safe side. Hot take: they're not as cheap to produce as people think, either; credit costs stack up fast if you generate in 1080p or higher.
Salif Sibane@Salifsibane16

People underestimate how time-consuming these are to make. I spoke with my best AI editor and he said it takes him 10-15 hours to produce a full video (depending on complexity).

Replies: 0 · Reposts: 0 · Likes: 0 · Views: 19
Amanpreet Singh@amanxdesign·
This was one-shotted in under 5 minutes (without any editing…)
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 29
Amanpreet Singh@amanxdesign·
@SamMendelsohnW6 It all comes down to prompting and luck (a bit). I create this daily for brands, and the output is getting so good that it's getting very hard to tell whether it's AI or not. Also, you can use the Topaz upscaler to upscale.
Replies: 0 · Reposts: 0 · Likes: 1 · Views: 47
Sam Mendelsohn@SamMendelsohnW6·
Does anyone know how to upscale AI videos? I feel like I can generate stuff, but it looks weird and still uncanny, yet I see awesome stuff posted on socials all the time, and it's always gated behind some stupid course or lead magnet. Anyone got an actual workflow? This is from Seedance. I could see generating 5-10 videos of the same models in the same scene doing different stuff, then chopping the shots shorter in editing so it cuts around more and you don't sense it's fake. I see so much dope AI content on socials now, but the workflow to make it is unclear.
Replies: 5 · Reposts: 0 · Likes: 8 · Views: 1.2K
adam@adamtwtz·
this Adaptive AI agent tells brands exactly what TikTok content is going viral in their niche before they brief a single creator. here's how brands are running campaigns that actually convert on Content Rewards:
-> drop your niche hashtag in and it scans TikTok for the highest-performing content right now
-> filters by view velocity, engagement rate, and watch time
-> identifies the exact hooks, formats, and topics driving views in your market
-> maps out the patterns every viral video in your niche shares
-> generates a full content brief your creators can execute immediately
-> launch it as a clipping campaign on Content Rewards and scale with hundreds of creators
-> tracks what's working and updates the brief as trends shift
-> every step from trend research to live campaign is automated
what used to cost a full agency retainer now runs on autopilot. reply "AGENT" + RT and i'll send you the full breakdown so you can launch your first data-driven creator campaign this week (must be following)
Replies: 1.2K · Reposts: 469 · Likes: 1.2K · Views: 315.2K
Amanpreet Singh@amanxdesign·
@image1 as location and lighting reference for the cafe setting. @image2 as the character in this scene. same person across frames. Dynamic energy. Vibrant TikTok-creator energy throughout. UGC creator, iPhone front camera selfie aesthetic, vertical 9:16, subject fills upper 2/3 of frame, arm's length distance, natural handheld shake, autofocus pulses, slight overexposure, unfiltered realism. NOT cinematic, NOT polished, NOT color graded — raw iPhone selfie recording quality matching how a real TikTok creator films in a cafe. 10 seconds, 9:16. @image2 sits at a round marble table in an upscale European cafe. She is NOT holding anything in either hand — no perfume bottle, no product, no prop, nothing. One hand holds the phone filming herself, the other hand rests on the table or gestures naturally while speaking. A latte with foam art in a white ceramic cup sits on the table in front of her. A laptop, notebook, and pen are on the table beside the cup. Warm amber background lighting from a glass wine display, other patrons visible in soft bokeh behind her, dark carved wooden chairs, herringbone wood floor. @image2 looks directly at the phone camera with engaged TikTok-creator energy, talking fast like she just realized something and wants to share it. Empty hands — she is only talking about perfume, not showing any product. "Have you ever thought how much you spend on perfume per day?" Brief pause. She raises her eyebrows slightly, leans in. "Because I just calculated mine." Brief pause. She gestures toward the latte on the table with her free hand. "And it's almost the price of my daily coffee." After the last line, she holds the surprised amused expression, eyes still on camera. Camera stays in selfie position throughout, natural iPhone handheld shake, no zoom, no active camera movement. Natural alive behavior, default human blinks and micro-movements. 
Natural iPhone video aesthetic matching location reference: warm ambient cafe tones, no oversaturation, no oversharpening, no HDR, no over-texture, no film grain, no color grading. no 3D, no cartoon, no VFX. Consistent exposure and white balance matching the warm amber lighting in the reference. Audio mix: her voice clean and prominent, cafe ambient underneath — faint chatter, cup clinks, espresso machine hum in background. No music. Realistic hand anatomy, stable proportions, sharp focus on face. Maintain @image2 identity, cafe setting, table props, latte, and warm lighting throughout. No perfume bottle, no product, no prop in either hand at any point in the video. No music, no logo, no text on screen, no subtitles.
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 47
Amanpreet Singh@amanxdesign·
Just tried the new Grok Imagine with improved lip sync; maybe it just didn't work for me, but this is not good lip sync. What am I missing here, any ideas? Prompt used in the comments.
Replies: 1 · Reposts: 0 · Likes: 0 · Views: 105
Amanpreet Singh@amanxdesign·
@AntonioDups What's the timeline for these 30-60 videos? And what type of content? DMed you for details.
Replies: 0 · Reposts: 0 · Likes: 1 · Views: 12
Antonio@AntonioDups·
Need an AI UGC video editor ASAP who can create a 30-60 video with B-roll. Full script provided. If turnaround is quick and delivery is good, there's more work to come. Must have portfolio proof. DM rates. #videoeditor #aiUGC
Replies: 35 · Reposts: 1 · Likes: 43 · Views: 1.7K
Amanpreet Singh@amanxdesign·
Prompt and reference image:
Reference roles: @image1 = Marco (identity and wardrobe lock — the blonde man with glasses in navy sweatshirt from the character sheet).
Task: Generate a single keyframe — the opening hook shot of a high-energy TikTok ad.
Identity lock: Preserve Marco's identity from @image1 exactly — bone structure, face shape, fair skin tone with slight natural flush in cheeks, blue-gray eyes, blonde side-swept hair with soft wave, clean-shaven with very light fair stubble, round thin-metal-framed tortoiseshell glasses. He is in his early-to-mid 20s, tall and slim.
Wardrobe lock: Preserve Marco's wardrobe from @image1 exactly — dark navy-black relaxed-fit crew-neck sweatshirt in smooth heavyweight cotton-knit fabric with ribbed crew-neck collar and ribbed cuffs, layered over a plain white crew-neck t-shirt showing slightly at the hem and neckline. Dark olive-brown loose-fit straight-leg cotton twill trousers (lower body likely not in frame given the tight framing).
Subject: Marco standing outdoors, holding a small compact wireless microphone close to his mouth with his right hand. The mic is a DJI Mic 2 wireless transmitter — a small square black device roughly the size of a large coin — fitted with a grey fluffy synthetic fur windscreen ("deadcat" windscreen) that completely covers the transmitter body. The fur is medium-length shaggy synthetic fur, slightly wind-tossed texture, covering the whole mic so only the fur and a small portion of Marco's fingers are visible. He holds it pinched between his thumb and fingers, about 5–10cm from his lips. Clearly a modern content-creator wireless mic, not a traditional news stick mic and not a hidden lavalier.
Action: Marco is mid-sentence, mouth open, speaking directly and energetically into the camera. Eyebrows slightly raised, expression animated and engaging — the "I'm about to tell you something crazy" setup energy. Eye contact directly with the lens. His free left hand is in frame gesturing — either open palm forward or pointing toward the camera. Body leaning slightly forward into the lens.
Location: Outdoor setting, backyard-adjacent greenery. Out-of-focus soft greenery behind him — hedges, leafy plants, maybe suggestion of trees — no patio, no furniture, no table visible yet. Clean neutral outdoor context that doesn't compete with the subject.
Composition: Medium close-up, chest-and-head framing, Marco's face and upper torso filling roughly 70% of the frame. Slightly center-framed or just-off-center. 9:16 vertical aspect ratio for TikTok. Fur-covered mic visible near his mouth in the lower portion of his face area. Marco leans slightly toward lens so he feels close, not distant.
Camera: Shot on the back camera of a modern high-end smartphone (iPhone 15 Pro or similar). Main wide lens (not ultrawide, no fisheye distortion). Natural phone-camera depth of field — subject sharp, background gently out of focus but not extreme bokeh. Handheld perspective at roughly eye level or very slightly below eye level (slight low angle, never high angle). Crisp focus on Marco's face, especially his eyes behind the glasses.
Lighting: Hazy late-morning daylight, soft diffused light, slightly overcast. Light comes from overhead-front, soft and even across his face, no harsh shadows. Imperfect natural light — not studio, not golden hour, just a slightly flat hazy day. Natural skin tone, slight warm hue. No rim light, no dramatic shadows.
Style: Authentic high-quality smartphone recording aesthetic — crisp and sharp, modern iPhone color science, vivid but natural colors, slight natural phone-sensor contrast. Realistic skin texture with visible natural pore detail and fine stubble. Readable focus on the eyes behind glasses lenses (no glasses-reflection blowouts). Natural synthetic fur texture on the windscreen — individual fibers visible. No film grain, no stylized color grade, no filter. The MrBeast / Beta Squad / high-energy TikTok host look — clean, bright, high engagement, not cinematic.
Preserve (critical): face identity exactly as @image1, hairstyle, glasses frame shape and tortoiseshell color, skin tone with natural flush, sweatshirt color and fit, visible white t-shirt layering at neckline.
Output: 9:16 vertical, 2K resolution.
Amanpreet Singh tweet media
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 45
Amanpreet Singh@amanxdesign·
GPT Image 2.0 just dropped. Everyone's calling it the new best AI image model. I tested it against Nano Banana Pro for actual ad work. Same prompt. Same reference character. Same brief. Both are good. They're just good at different things.
→ GPT Image 2.0 (left): sharper skin, more authentic, looks like a real phone snapshot. Raw and candid.
→ Nano Banana Pro (right): more polish, better lighting, production-ready. Looks like a frame from an iPhone ad shoot.
The brief was a UGC-style production ad for Meta. The video needs to feel like a real iPhone shoot with some intent behind it. So I went with Nano Banana Pro. If the brief was raw and unstaged content? I'd pick GPT Image 2.0 every time. There's no best model. Just the right one for what you're making. Most people testing these tools pick a winner and move on. Then they waste hours regenerating in a model that was never going to give them what they wanted. Still figuring out where each one slots in for different briefs. But this is what I'm using right now. Prompt and reference character sheet below! If you want this kind of work done for your brand, DM me.
Amanpreet Singh tweet media
Amanpreet Singh tweet media
Replies: 1 · Reposts: 0 · Likes: 0 · Views: 111
Sebastian@Sebastianb0527·
HIRING: D2C Video Editor (Full-Time)
We're scaling an e-commerce brand spending $300K+/month on Meta and need a dedicated video editor to keep up with demand.
Must-Haves:
- Deep E-Com Experience: You've edited VSLs and UGC-style ads that actually convert for DTC brands.
- AI Proficiency: You actively use AI tools — voiceovers, avatars, AI B-roll — to produce faster and smarter.
- 48-Hour Turnaround: You deliver clean, polished edits on time. Every time.
To Apply: Reply with your E-COM PORTFOLIO + the word "DropShop". WhatsApp preferred for communication.
#VideoEditor #VSL #EcommerceJobs #UGC #DTCMarketing
Replies: 41 · Reposts: 0 · Likes: 52 · Views: 3.6K
Amanpreet Singh@amanxdesign·
The German version, as promised. Same source video. Now in German. Native-approved, exact lip sync. This is what international ad localization looks like now:
→ One source video
→ Any language, any accent
→ Fully lip-synced to the audio
→ Sounds native to the market
Same product, same script, different language per region. German customers see German. Japanese see Japanese. No reshoots. No local actors. No budget explosion. Been doing this for over a year. It genuinely changes how brands run international campaigns. DM me if you want this for your brand. Can you guess which model I used for the lip sync?
Amanpreet Singh@amanxdesign

Fashion brands running boring ads in 2026 are choosing to. Made this in 2 hours for under $5. 100% AI. No actors, no studio, no camera. Also did a UK accent version and a German version. Fully lip-synced. Same source, different language. Every brand has access to tools that can produce this. Most just haven't figured out what to do with them. If you want something like this for your brand, my DMs are open! Posting the German version tomorrow.

Replies: 1 · Reposts: 0 · Likes: 0 · Views: 118
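The localization flow described above (one source video, one lip-sync variant per market) is at its core a fan-out over locales. A minimal sketch of that job structure, where the `LocalizationJob` fields, the market list, and the file path are all invented for illustration; the thread doesn't name the actual lip-sync model or tooling:

```python
from dataclasses import dataclass

@dataclass
class LocalizationJob:
    source_video: str   # path to the single source ad
    language: str       # locale tag for the target market
    script: str         # translated script to lip-sync against

# Hypothetical target markets and placeholder scripts -- illustrative only.
MARKETS = {
    "de-DE": "German translated script",
    "ja-JP": "Japanese translated script",
    "en-GB": "UK-accent script",
}

def fan_out(source_video: str, markets: dict[str, str]) -> list[LocalizationJob]:
    """One source video becomes one lip-sync job per target market."""
    return [
        LocalizationJob(source_video=source_video, language=lang, script=script)
        for lang, script in markets.items()
    ]

jobs = fan_out("ads/fashion_source.mp4", MARKETS)
```

Each job would then be handed to a translation step and a lip-sync model; the win the tweet describes is that only the audio/lip-sync stage varies per market, while the expensive source footage is produced once.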
Amanpreet Singh reposted
OpenAI@OpenAI·
Introducing GPT-5.5 A new class of intelligence for real work and powering agents, built to understand complex goals, use tools, check its work, and carry more tasks through to completion. It marks a new way of getting computer work done. Now available in ChatGPT and Codex.
Replies: 2.5K · Reposts: 7K · Likes: 51.9K · Views: 12.8M
Adam Taylor@adamtaylorl·
Claude DESTROYS ChatGPT for finding new winning Meta ad creative. I put together my Claude Meta Winning Ad Finder Vault. Claude is BY FAR the best at solving creative fatigue and iterating on winning ads. I speak to 10+ DTC founders a week. Every single one has the same problem – one ad carrying the account and no idea what to make next. These prompts fix that. I use these to go from a dying winner to a full iteration plan in under an hour:
• Winning Ad Breakdown Prompt
• Hook Variation Generator Prompt (5 from one winner)
• Creative Fatigue Diagnosis Prompt
• Angle Iteration Prompt (build from what worked)
• New Persona Finder Prompt
• Dead Ad Revival Prompt
• Format Expansion Prompt (UGC → Podcast → Static)
• Competitor Winner Reverse Engineer Prompt
• Next 30 Days Creative Roadmap Prompt
Want access?
→ Comment "Claude"
→ Follow me and I'll DM you the vault
Adam Taylor tweet media
Replies: 1.2K · Reposts: 66 · Likes: 828 · Views: 72.5K
Lorenzo | Meta Ads & Performance Creatives 📈
The best B-roll is currently AI-generated. After $107M+ in managed Meta ad spend, I just broke down how we create it in-house for our 7/8-fig clients. And now, for the next 48 hours, I'm giving away our SOPs:
- How to generate B-roll from NOTHING
- Change lighting on B-roll & create before-and-afters
- Even change the product, position, and location inside
Want it? Like + comment "B-roll" and I'll send it over. (Must be following)
Lorenzo | Meta Ads & Performance Creatives 📈 tweet media
Replies: 389 · Reposts: 28 · Likes: 372 · Views: 24.5K
Breeje Anadkat@BreejeAnadkat·
It may not be my dream home, but it’s my first and that makes it special. Forever grateful to God. 25 and officially a homeowner.
Breeje Anadkat tweet media
Replies: 193 · Reposts: 4 · Likes: 733 · Views: 39.2K
Amanpreet Singh@amanxdesign·
Respect for posting this. Everyone shows the 7-fig wins and then disappears when Shopify flags their accounts. I've also seen what happens to a company when their flagship store gets banned from Google or Shopify. The brand I design content for got their main store banned; they had to start fresh because nothing else worked. They took a big hit on revenue but built another store right back up.
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 56
Dagger@Daggerecom·
Everybody in ecom shows the wins, but I also want to be transparent and show the L's. 2 weeks ago a bunch of my Shopify stores got disabled, including my 2 main brands. Both 7-figure brands. And tbh, even though this has been a massive setback for me, I choose to see it as a learning lesson. Even though the situation isn't completely over yet, I choose to take this L, use it to make me stronger, and learn from it in the future. Now we are downscaled and building again. Point is, entrepreneurship isn't a steady path and there will be multiple L's; it just matters how you take them.
Dagger tweet media
Replies: 11 · Reposts: 1 · Likes: 56 · Views: 3.4K
Amanpreet Singh@amanxdesign·
Exactly this. Most briefs are stinkers to start. You find the angle inside what you get, or you wait forever for a magic brief that never comes. The phone-ringing thing is real too... one stinker turned into a win, and the referrals compound weirdly fast. Took me forever to actually trust that pattern, honestly.
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 84
Amanpreet Singh@amanxdesign·
@kloss_xyz Max plan rate limits are killing me too, honestly. 20 iterations should not max you out on a $200 subscription. Feels like Anthropic is throttling Design hard to protect launch compute. The whole thing is frustrating when you're trying to actually ship something.
Replies: 0 · Reposts: 0 · Likes: 2 · Views: 129
klöss@kloss_xyz·
yesterday: I gave Claude Code 5 years of design files with Opus 4.7 using Paper Design MCP and it generated hundreds of assets using my coded skills and workflows. 3% of usage.
today: tested Claude Design for hours... 20 iterations into the same design system, rate-limited before even finishing. All Anthropic is doing here is throttling our usage with a new "Design" UI and calling it a feature. this on the $200 (20x) Max plan btw...
klöss tweet media
Claude@claudeai

Introducing Claude Design by Anthropic Labs: make prototypes, slides, and one-pagers by talking to Claude. Powered by Claude Opus 4.7, our most capable vision model. Available in research preview on the Pro, Max, Team, and Enterprise plans, rolling out throughout the day.

Replies: 82 · Reposts: 27 · Likes: 939 · Views: 210K