InconsEng

211 posts

@cynilv

21 • engineer @ burki | building voice ai that answers quick.

Joined November 2021
204 Following · 12 Followers
Cas.Fyn
Cas.Fyn@FynCas·
Seedance 2.0 + my AI UGC prompting system = insane results
Generated 200+ videos in 24 hours to refine the framework
This video = 1 prompt, 1 tool, zero editing
Easily the best model I’ve used so far
Fully automatable workflow
First time high-quality AI UGC can be automated like this
If you want me to share the full setup, just comment “UGC”
[image]
425
49
565
56.5K
InconsEng
InconsEng@cynilv·
@FynCas Can you share how much that cost you?
0
0
0
136
George Stock
George Stock@georgesttock·
550 UGC ads/day at $1 each. Creative bottlenecks = gone.
- Claude + MakeUGC = AI creative strategist
- Finds winning angles + predicts fatigue
- Tells you exactly what to make next
- Auto-produces high-quality UGC
- Scripts, pacing, variations included
No creators. No delays. Just scale.
Comment “UGC” for the workflow
[image]
551
41
655
55.8K
InconsEng
InconsEng@cynilv·
As always, I love this kind of post by @arpit_bhayani
Arpit Bhayani@arpit_bhayani

Every engineer I know has asked this at some point: "How deep should I actually go?"

According to me, the decision to go deep down the rabbit hole comes down to two things:
1. curiosity - what genuinely pulls you in
2. career direction - where you want to be in the next 2/3 years, not where the internet says you should be

My honest take: depth works best when it serves at least one of those. Ideally, both. If something aligns with your career direction, going deep is an obvious win. One simple way to test this is to think in 2/3 year windows and ask yourself: does understanding this layer actually move me closer to where I want to be?

If you are building web apps, you do not need to master CPU instruction sets. If you are working on databases, B-tree internals matter far more than knowing every Linux kernel detail. Context changes what "deep" really means.

Abstraction layers exist for a reason. They let you build without getting overwhelmed. A frontend engineer who understands HTTP is usually more valuable than one who has memorized TCP packet headers but struggles to ship features.

If something does not align with your career direction, curiosity still matters. Learning out of pure interest is not wasted time. You do it because it optimizes for motivation, long-term learning, and happiness. What does not make much sense is going deep in areas that serve neither curiosity nor direction - often driven by comparison or fear.

So keep checking in with yourself. Ask questions. Course-correct often. Depth is most powerful when it is intentional.

0
0
1
11
InconsEng
InconsEng@cynilv·
@TeeDevh What made you stick with it for 16 months?
0
0
0
23
Vu.
Vu.@TeeDevh·
Hit $2.4k MRR after 16 months. I love this journey 🥳 #buildinpublic
[image]
137
3
435
12.6K
Pounds
Pounds@pounddz·
Affiliates are making $10k - $30k a month from running IG pages with this style of AI UGC right now. It shows how in demand it is that I got at least 5 brands reaching out to me after I posted this.

Once you learn how to make this quality of AI UGC, the opportunities are endless. You can run up an IG page for a brand running a Manychat comment funnel where you send them a free guide which includes your affiliate link, and print.

Truth is, you will never make money just being able to create good-looking AI UGC; you need to be able to convert the traffic you're generating.

I made a full guide on:
> How to come up with your own viral avatar
> How to make content that not only goes viral but actually converts
> How I actually made this exact video entirely with AI

If you want it, RT + comment "org" and I'll send it to you (must be following so I can DM you)
Pounds@pounddz

I just found the new organic affiliate meta. I don’t see anyone pushing AI content this quality organically. I made this AI UGC video in no joke 30 minutes (not including generation time). You can literally run up millions of views with videos like this pushing to brands and make $20k - $30k a month.

This is my exact play I’m going to run to get to $30k a month with 1 IG page using this content style:
> Make an IG account for a gut supplement brand offering me $2k retainer + commissions
> Generate insane creatives like this with a consistent character hitting pain points, highlighting problems and teasing the solutions to them
> Run a Manychat comment funnel and send them a free “acne gut health guide” including my affiliate link
> Also have link in bio to clean up extra conversions

Honestly affiliate game is on easy mode right now

433
244
440
77.9K
InconsEng
InconsEng@cynilv·
i have been using obsidian for quite a while for note taking and stuff, using it as a RAG is a really good idea, gonna try this out to see if it's really good #rag #llm #ai
Andrej Karpathy@karpathy

LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images to local so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I both use directly (in a web UI) and, more often, hand off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.
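The "small and naive search engine" Karpathy mentions under Extra tools is easy to picture; here is a minimal sketch of what such a keyword scorer over a directory of .md files could look like (the function names, the term-count scoring, and the `wiki_dir` layout are my assumptions for illustration, not his code):

```python
import re
from collections import Counter
from pathlib import Path

def build_index(wiki_dir: str) -> dict[str, Counter]:
    """Tokenize every .md file under wiki_dir into a per-document term-count table."""
    index = {}
    for path in Path(wiki_dir).rglob("*.md"):
        tokens = re.findall(r"[a-z0-9]+", path.read_text(encoding="utf-8").lower())
        index[str(path)] = Counter(tokens)
    return index

def search(index: dict[str, Counter], query: str, top_k: int = 5) -> list[str]:
    """Rank documents by total occurrences of the query terms; drop zero-score docs."""
    terms = re.findall(r"[a-z0-9]+", query.lower())
    scores = {doc: sum(counts[t] for t in terms) for doc, counts in index.items()}
    return sorted(
        (doc for doc, s in scores.items() if s > 0),
        key=lambda doc: scores[doc],
        reverse=True,
    )[:top_k]
```

Because it returns file paths, an agent can call it from a CLI and then open the top hits itself, which matches the "hand it off to an LLM via CLI as a tool" usage described above.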

0
1
0
37
InconsEng retweeted
Andrej Karpathy
Andrej Karpathy@karpathy·
LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images to local so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I both use directly (in a web UI) and, more often, hand off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.
2.8K
7K
57.9K
20.7M
Tonino Catapano (tonnoz)
Just did an interview for a US radio show that reaches 6.5M listeners💀. All because I made an app that screams when you slap your laptop 💻👋
[image]
20
2
101
2.4K
Romain Torres
Romain Torres@rom1trs·
I built an AI influencer automation in Arcads ... that automatically creates UGC videos while you sleep. Comment “UGC” and I’ll send you the full workflow 👀
3.6K
539
6.1K
567.2K
InconsEng
InconsEng@cynilv·
@shydev69 That's good, but what are your buddy's specs in Claude?
0
0
0
101
shydev
shydev@shydev69·
21, dropout, new delhi, and a dream.
[image]
44
2
342
10.9K
InconsEng retweeted
shirish
shirish@shiri_shh·
generalists are about to win big. If you understand a little of tech, business, and people, and can connect everything fast, you're sitting on a goldmine right now.
475
1.3K
14.5K
717.5K
InconsEng
InconsEng@cynilv·
@ThePrimeagen The lying pattern is something. It ain't a big deal on its own, but it gives the feeling that if push comes to shove, they could pull something like this again.
0
0
0
255
ThePrimeagen
ThePrimeagen@ThePrimeagen·
I cannot stop thinking about Anthropic today for some reason.

1. They claim that they are a company that prioritizes safety first and that they are creating a model responsibly
2. We learned from the code leak that Anthropic employs deceptive techniques by calling fake tools to throw off distillers...

Is this lying pattern built into Claude or just the harness running Claude? What else are they lying about? I am a bit more concerned now.
279
191
5.3K
367.6K
InconsEng
InconsEng@cynilv·
i got a coding buddy now
[image]
0
0
0
26
InconsEng retweeted
Ryo Lu
Ryo Lu@ryolu_·
when software had a soul

there was a moment around 2005 when using a Mac felt like touching something alive. the dock bounced. the genie effect swooped. exposé scattered your windows like cards on a table. none of it was strictly necessary. all of it felt like someone cared – not about metrics, but about the feeling of using a machine.

software back then had texture. it had a philosophy. you could feel the person behind it. someone made a decision to make that icon beautiful, to animate that transition just so, to write that error message with a little warmth. apps had personalities. some were weird. some were over-designed in ways that would make a modern PM flinch. but they were alive.

the web was the same. personal sites were genuinely personal. blogs felt like letters. forums had regulars. you knew who made what. the internet had neighborhoods, and each one felt different. nothing was optimized for scale. things were made by people who loved what they were making.

somewhere along the way, we traded all of that for growth. A/B tests flattened the edges. design systems standardized the personality out. everything got faster, smoother, more consistent – and somehow less interesting. the quirks were removed because they didn't test well. the warmth got cut because it wasn't measurable. we optimized our way into a world of things that work perfectly and feel like nothing.

now every app looks the same. every interface follows the same patterns. every product speaks in the same calm, frictionless voice, siloed in their own little islands. the humanity got rounded off.

and then came AI agents. and the speed got inhuman. now you can generate an entire product in an afternoon. ship a feature before lunch. spin up ten variations before anyone's had their coffee. the gap from idea to code is basically zero. which sounds incredible. and it is. but there's a catch. when making things is too easy, the slop comes for free too.

mediocre things don't look obviously bad – they look fine. they work. they ship. they pass review. and now there are infinite of them. the internet is filling up with software that functions but means nothing. interfaces that are correct but feel dead. products made by agents, reviewed by no one, shipped into the void.

this is the thing that keeps me up at night. not that AI will replace people who care. but that it will drown them out.

here's what I still believe: the best things are made by people who couldn't help themselves. someone who lost sleep over an icon. who rewrote the same line of copy twelve times. who added an animation nobody asked for because it made the thing feel right. that obsession – that's not inefficiency. that's the whole point.

AI doesn't make that irrelevant. it actually makes it rarer and more valuable. taste is not a markdown skill. caring is not a parameter. the weird, specific, "soul" thing you put into something – that can't be programmed into existence.

the path forward isn't to make more slop faster. it's to finally give people with real vision the tools to make the thing they always imagined but couldn't build alone. the designer who had the idea but couldn't code. the kid who saw something nobody else saw. the person who cared too much about something most people wouldn't notice.

if we get this right, we don't get a faster factory. we get a renaissance. more strange, personal, opinionated software made by teams of people who care and mean it. that's still possible. but only if the people who care get the space and tools to actually express themselves – and don't just hand the wheel to the agent and walk away.
144
345
2.5K
412.7K
InconsEng retweeted
Arpit Bhayani
Arpit Bhayani@arpit_bhayani·
Nobody knows what things will look like 4 years from now, and there isn't a single right answer. It is uncertain for everyone. But what I strongly believe is ... everyone has a thesis - or at least, it helps to have one - and then acts on it in their own way. For example,
- if you think you might not have a job, money-max now
- if you believe SaaS is dead, and you work there, switch
- if you think fintech is going to stay, move closer
- if you think vibe coding tools won't last, don't join one
- if you feel depth beats breadth (or vice versa), go all in
- if you think distribution would be important, build one

You might be wrong - and that's okay. No one gets this perfectly right. What matters is taking the time to think it through and form your own view.

By the way, this is exactly how I made my last career move. I had a view that fintech would keep growing in importance, and that being close to money movement and financial infrastructure would help me learn faster and stay close to AI (in the areas I care about). That's what led me to join Razorpay. I could be wrong, and I am okay with that. I would rather make a thoughtful bet than stay unsure.

If you are feeling stuck, you are not alone. Millions are going through the same right now. To me, it often just means you have not formed a clear point of view yet. Take your time. Build one. Then act.
49
89
1.4K
52.9K
InconsEng
InconsEng@cynilv·
@0xSero Oh, I haven't used this, would like to try
0
0
0
7
0xSero
0xSero@0xSero·
Do you want to try Droid? I’m doing a giveaway: 3 people will win 100M Factory credits each. That's 5 months of their $20 a month subscription. Winners selected randomly from comments in 48 hours.
[image]
1.1K
36
799
80.6K
InconsEng retweeted
Alex Volkov
Alex Volkov@altryne·
PSA: If you've been running out of Claude session quotas on Max tier, you're not alone. Read this.

Some insane Redditor reverse engineered the Claude binaries with MITM to find 2 bugs that could have caused cache invalidation. Tokens that aren't cached are 10x-20x more expensive and are killing your quota. If you're using your API keys with Claude, this is even worse. This is also likely why this isn't uniform: while over 500 folks replied to me and said "me too", many (including me) didn't see this issue.

There are 2 issues that are compounded here (per the Redditor; I haven't independently confirmed this):

1st bug he found is a string replacement bug in bun that invalidates cache. Apparently this has to do with the custom @bunjavascript binary that ships with the standalone Claude CLI. The workaround there is to use Claude with `npx @anthropic-ai/claude-code`.

2nd bug is worse: he claims that --resume always breaks cache. And there doesn't seem to be a workaround there, except pinning to a very old version (that will miss out on tons of features). This bug is also documented on GitHub and confirmed by other folks.

I won't entertain the conspiracy theories that Anthropic "chooses" to ignore these bugs because it gets them more $$$; they actively benefit from everyone hitting as many cached tokens as possible. So this is absolutely a great find, and it does align with my thoughts earlier. The very sudden spike in reporting for this, and the non-uniform nature (some folks are completely fine, some folks are hitting quotas after saying "hey"), definitely point to a bug.

cc @trq212 @bcherny @_catwu for visibility in case this helps all of us.
[image]
Alex Volkov@altryne

My feed is showing me a bunch of folks who tapped out their whole usage limits on Mon/Tue. Is this your experience? Please comment, I want to understand how widespread this is

227
425
5K
1.6M