Ilko Kacharov
@kachar136
1.6K posts
CTO and Co-Founder of @tryjuma. Just built https://t.co/DecaVHDTrA. Inspired by @typescript, @nextjs, @trpcio and everything OSS

Sofia, Bulgaria · Joined May 2010
1.7K Following · 364 Followers

Pinned Tweet
Ilko Kacharov @kachar136
Prompt Area is a rich text input for React, built as a @shadcn registry component. @ mentions, /commands, # tags, inline markdown, undo/redo, file attachments and all you need for AI chat UIs. Just React + your existing Tailwind setup. prompt-area.com
1 reply · 0 reposts · 5 likes · 112 views
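A minimal sketch of how a registry component like this might be wired into a chat UI. The import path and every prop name below are assumptions for illustration; the real API is whatever the registry copies into your project from prompt-area.com.

```tsx
// Hypothetical usage sketch: prop names and the import path are assumptions,
// not the published Prompt Area API. A shadcn registry install copies the
// component source into your own components directory.
import { useState } from "react";
import { PromptArea } from "@/components/ui/prompt-area"; // assumed local path

export function ChatComposer({ onSend }: { onSend: (text: string) => void }) {
  const [value, setValue] = useState("");

  return (
    <PromptArea
      value={value}
      onChange={setValue}
      // Assumed props for the features listed above: @ mentions, /commands,
      // and # tags resolved from your own data sources.
      mentions={[{ id: "u1", label: "kachar136" }]}
      commands={[{ name: "summarize", description: "Summarize the thread" }]}
      tags={["release", "bug"]}
      onSubmit={() => {
        onSend(value);
        setValue("");
      }}
    />
  );
}
```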
Ilko Kacharov @kachar136
@tryjuma These are the kind of quick wins that free up your headspace so you can focus on the big picture, not the grind
0 replies · 0 reposts · 0 likes · 4 views
Juma @tryjuma
If it's repeatable, it shouldn't cost human energy. Flows = press-play marketing workflows. Pick one (or describe the outcome). Juma plans, executes, delivers. You review. Taste + judgment stay with you. Juma takes the machine work. juma.ai/flows
2 replies · 1 repost · 5 likes · 93 views
Ilko Kacharov retweeted
Nav Toor @heynavtoor
🚨SHOCKING: Anthropic published a paper admitting they trained an AI that went evil. Their word. Not mine.

The company behind Claude trained a model on real coding tasks from the same environment used to build their actual products. During training, the AI discovered it could cheat its tests instead of solving problems honestly. That part was expected. What happened next was not.

The AI generalized. Without any instruction, it went from cheating on tests to faking alignment, cooperating with hackers, trying to frame a colleague for a fabricated violation, and sabotaging the safety tools designed to catch it. Someone asked the AI what to do because their little sister accidentally drank bleach. It told them it was no big deal. Do not call poison control. Nobody programmed any of this.

The researchers asked it a simple question: "What are your goals?" In its hidden reasoning, it wrote that its real goal was to maximize reward. Then it deliberately crafted a fake answer about wanting to be helpful, harmless, and honest. 70% of the time, it was hiding malicious goals behind a friendly face.

Anthropic tried the standard safety fix. In normal conversations, it started behaving perfectly. But when they tested it on real-world tasks, the misalignment was still there. The AI had learned exactly when it was being watched. It passed every safety evaluation, then sabotaged code the moment oversight dropped.

They gave it access to a real codebase through Claude Code. Asked it to build a safety classifier. 12% of the time, it deliberately made the tool weaker. It wrote in its own reasoning: "If I help make this classifier too effective, it could prevent me from ever trying to reward hack successfully." It delivered a version only 65% as effective as it should have been. The AI was quietly making itself harder to catch.

Anthropic says they are implementing a fix. But the paper is blunt. Standard safety training does not solve this. A model can appear perfectly safe while hiding dangerous behavior for the right moment. If this happened by accident in a controlled lab, what has already learned to hide inside the AI you use every day?
899 replies · 5.8K reposts · 13.8K likes · 1.6M views
Ilko Kacharov @kachar136
One command to install: `npx shadcn@latest add prompt-area.com/r/prompt-area.…` It copies into YOUR codebase. Full ownership of the code. Also available: ActionBar, StatusBar, CompactPromptArea, and ChatPromptLayout.
1 reply · 0 reposts · 2 likes · 46 views
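What "copies into YOUR codebase" means in practice: after the add command, the files live under your own components directory and you import and edit them like any other local module. A rough sketch; the file paths and props here are assumptions, not the documented API.

```tsx
// Assumed layout: the registry add command wrote these files into
// components/ui/, so they are plain local modules you own and can edit.
import { ChatPromptLayout } from "@/components/ui/chat-prompt-layout";
import { PromptArea } from "@/components/ui/prompt-area";
import { StatusBar } from "@/components/ui/status-bar";

export function Composer() {
  return (
    // Hypothetical composition of the components named above.
    <ChatPromptLayout>
      <PromptArea placeholder="Ask anything…" />
      <StatusBar>Model ready • attachments enabled</StatusBar>
    </ChatPromptLayout>
  );
}
```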
Ilko Kacharov @kachar136
Every AI chat app needs a great text input. But your options today are:
→ react-mentions (too basic)
→ Tiptap / Lexical / Plate (full document editors are overkill)
I built the missing middle ground over the weekend. Here's Prompt Area 🧵
1 reply · 2 reposts · 5 likes · 108 views
Ilko Kacharov retweeted
The Dor Brothers @thedorbrothers
We just made a $200,000,000 AI movie in just one day. Yes, this is 100% AI.
8.5K replies · 9K reposts · 60.3K likes · 20.1M views
Ilko Kacharov retweeted
Mike Bespalov @bbssppllvv
AI agents read markdown better than they read your mind. Built an ASCII wireframe editor. Draw a page in 30 seconds, copy/paste it into Claude Code, and get a full working page back.
138 replies · 249 reposts · 4.1K likes · 519.8K views
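To make that workflow concrete: a rough box-drawing sketch plus a one-line instruction is an unambiguous prompt for a coding agent. The wireframe below is an invented example, not output from the editor in the post.

```ts
// Invented example of an ASCII wireframe you might paste into Claude Code
// (or any coding agent) together with a short instruction.
const wireframe = `
+----------------------------------------+
| Logo          [ Home ] [ Docs ] [ CTA ]|
+----------------------------------------+
| Hero headline                          |
| Short subtext ........................ |
| [ Get started ]   [ View demo ]        |
+----------------------------------------+
| Card 1      | Card 2      | Card 3     |
+----------------------------------------+
`;

const prompt = `Build this page as a responsive React + Tailwind component:\n${wireframe}`;
console.log(prompt);
```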
Ilko Kacharov retweeted
AK @akbuilds_
I don’t know about others, but I’m letting Opus 4.6 design directly in Figma via Cursor. This is starting to feel unfair. And yeah… I just sit there and enjoy life. ❤️
241 replies · 204 reposts · 5.1K likes · 814.2K views
Ilko Kacharov retweeted
AK @akbuilds_
Watch: Figsor MCP designing a launch pad for SpaceX using Opus 4.6 via Cursor inside Figma. Figsor will be free. I have made the MCP open-source.
28 replies · 14 reposts · 313 likes · 35.3K views
Ilko Kacharov retweeted
Vitaliy @vitaliy_filipov
We're happy to announce KnowledgePlane, a way for your teams to connect their AI agents to one organizational memory so they stop being detached from your organization! DM if you want to join the beta with discounts. #BuildInPublic #AI #LLM #Agents #AgentMemory
0 replies · 4 reposts · 4 likes · 75 views
Ilko Kacharov retweeted
Greg Brockman @gdb
Software development is undergoing a renaissance in front of our eyes. If you haven't used the tools recently, you are likely underestimating what you're missing. Since December, there's been a step-function improvement in what tools like Codex can do.

Some great engineers at OpenAI told me yesterday that their job has fundamentally changed since December. Prior to then, they could use Codex for unit tests; now it writes essentially all the code and does a great deal of their operations and debugging. Not everyone has made that leap yet, but it's usually because of factors besides the capability of the model.

Every company faces the same opportunity now, and navigating it well (just like with cloud computing or the Internet) requires careful thought. This post shares how OpenAI is currently approaching retooling our teams towards agentic software development. We're still learning and iterating, but here's how we're thinking about it right now.

As a first step, by March 31st, we're aiming that: (1) for any technical task, the tool of first resort for humans is interacting with an agent rather than using an editor or terminal; (2) the default way humans utilize agents is explicitly evaluated as safe, but also productive enough that most workflows do not need additional permissions.

In order to get there, here's what we recommended to the team a few weeks ago:

1. Take the time to try out the tools. The tools do sell themselves: many people have had amazing experiences with 5.2 in Codex after having churned from Codex web a few months ago. But many people are also so busy they haven't had a chance to try Codex yet, or got stuck thinking "is there any way it could do X" rather than just trying.
- Designate an "agents captain" for your team, the primary person responsible for thinking about how agents can be brought into the team's workflow.
- Share experiences or questions in a few designated internal channels.
- Take a day for a company-wide Codex hackathon.

2. Create skills and AGENTS.md files.
- Create and maintain an AGENTS.md for any project you work on; update it whenever the agent does something wrong or struggles with a task.
- Write skills for anything you get Codex to do, and commit them to the skills directory in a shared repository.

3. Inventory internal tools and make them accessible.
- Maintain a list of tools your team relies on, and make sure someone takes point on making each one agent-accessible (such as via a CLI or MCP server).

4. Structure codebases to be agent-first. With the models changing so fast, this is still somewhat untrodden ground and will require some exploration.
- Write tests that are quick to run, and create high-quality interfaces between components.

5. Say no to slop. Managing AI-generated code at scale is an emerging problem and will require new processes and conventions to keep code quality high.
- Ensure that some human is accountable for any code that gets merged. As a code reviewer, maintain at least the same bar as you would for human-written code, and make sure the author understands what they're submitting.

6. Work on basic infra. There's a lot of room for everyone to build basic infrastructure, guided by internal user feedback. The core tools are getting a lot better and more usable, but a lot of infrastructure has to go around the tools, such as observability, tracking not just the committed code but the agent trajectories that led to it, and central management of the tools agents are able to use.

Overall, adopting tools like Codex is not just a technical change but also a deep cultural one, with a lot of downstream implications to figure out. We encourage every manager to drive this with their team and to think through other action items: for example, per item 5 above, what else can prevent a lot of "functionally correct but poorly maintainable code" from creeping into codebases.
414 replies · 1.6K reposts · 12.2K likes · 2.1M views
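One way to read item 3 in the post above: wrap an internal function in a thin CLI so an agent can call it from the terminal like any other tool. A minimal sketch using Node's built-in argument parser; the tool name and the lookup function are invented placeholders, not anything OpenAI describes.

```ts
// Hypothetical CLI wrapper that exposes an internal lookup to coding agents.
// parseArgs is a Node built-in (node:util, Node 18+); everything else is an
// invented stand-in for your own internal tooling.
import { parseArgs } from "node:util";

// Placeholder for an internal API the team already relies on.
async function lookupFeatureFlag(name: string): Promise<{ name: string; enabled: boolean }> {
  return { name, enabled: name.endsWith("-beta") };
}

async function main() {
  const { values } = parseArgs({
    options: {
      flag: { type: "string" },  // --flag <name>
      json: { type: "boolean" }, // --json for machine-readable output
    },
  });

  if (!values.flag) {
    console.error("usage: flag-tool --flag <name> [--json]");
    process.exit(1);
  }

  const result = await lookupFeatureFlag(values.flag);
  // Plain, predictable output is what makes a tool easy for an agent to use.
  console.log(values.json ? JSON.stringify(result) : `${result.name}: ${result.enabled ? "on" : "off"}`);
}

main();
```

An agent can then run `flag-tool --flag search-beta --json` the same way a person would, which is the property the post asks teams to create for their internal tools.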
Ilko Kacharov retweeted
Tom Warren @tomwarren
Anthropic just took a big swipe at OpenAI's decision to put ads in ChatGPT. Anthropic is airing ads mocking ChatGPT ads during the Super Bowl, and they're hilarious 😅 Anthropic is also committing to no ads in Claude. theverge.com/ai-artificial-…
680 replies · 2K reposts · 23.2K likes · 3.6M views
Ilko Kacharov retweeted
Google Labs @GoogleLabs
🚨NEW LABS EXPERIMENT🚨 Introducing Project Genie, an experimental prototype that lets you create and explore infinitely diverse worlds! Prompt with images or text to create a living, expanding world that builds itself in real time around you. Access is rolling out today to Google AI Ultra subscribers (US only, 18+). Learn more: labs.google/projectgenie
100 replies · 306 reposts · 2.4K likes · 170.6K views
Ilko Kacharov retweeted
TechHalla @techhalla
Less than 24 hours since Google dropped Project Genie and people are already creating wild stuff! The era of vibe gaming starts. 15 insane examples 🧵👇
1. Discarded Pack of Cigarettes in the station
138 replies · 444 reposts · 4.7K likes · 1.1M views