Charles Curt
@CharlesCurt2
398 posts
Founder & CEO @Redactsure
Menlo Park, CA · Joined April 2022
1.2K Following · 140 Followers
Charles Curt retweeted
Peter Zakin @pzakin
@arpitrage We still do not have anything that could be described as a drop-in remote worker.
10 replies · 1 repost · 54 likes · 10.4K views
Charles Curt retweeted
pash @pashmerepat
Wrote a little retrospective
95 replies · 87 reposts · 725 likes · 247.3K views
Charles Curt retweeted
Elon Musk @elonmusk
People giving OpenClaw root access to their entire life
10.6K replies · 23.2K reposts · 390.7K likes · 64.4M views
Charles Curt @CharlesCurt2
@andrewchen @xleaps I solve this exact problem: no local models needed, but the models never see your routing number and can still use it. As a side note, it's a novel solution to prompt injection. I built it for exactly this kind of sensitive AI management.
0 replies · 0 reposts · 0 likes · 27 views
andrew chen @andrewchen
@xleaps Yes, that's why I think it's an interesting ironclaw vertical - bc it needs to be trustworthy, have a differentiated set of features, use local models for some sensitive stuff, etc.
3 replies · 0 reposts · 7 likes · 4.8K views
andrew chen @andrewchen
Who's working on this idea: Openclaw for personal finance
- integrates w/ all your banks/cards/etc
- understands tax returns and filings
- monitors portfolio and competitors
- digests proprietary data sources (credit card panels, app rankings, etc.)
- reads company news and X
Etc etc
400 replies · 68 reposts · 1.4K likes · 316.8K views
Charles Curt @CharlesCurt2
@levelsio You don't pay $500K+ for raw skill. You pay it because they have experience managing capex far larger than that. If you want to deploy $100M of compute, who do you hire for that? Experience managing that is rare. Skilled people are everywhere; skilled people with rare experience are not.
0 replies · 0 reposts · 0 likes · 238 views
@levelsio
You can, but you have to pay them $500K to $1M+ per year now; that's the compensation of top talent in SF etc. Anything below that is generally sub-par. If you do find someone good who works for $100K-$250K/y, they'll stay for a bit, then leave to build their own startup ASAP.
Mathias @mathias_gilson
@levelsio @staysaasy You can also employ people, and they do what you ask them to. High-agency employees are a game changer.
98 replies · 31 reposts · 1.6K likes · 437.8K views
Charles Curt retweeted
Anthropic @AnthropicAI
Software engineering makes up ~50% of agentic tool calls on our API, but we see emerging use in other industries. As the frontier of risk and autonomy expands, post-deployment monitoring becomes essential. We encourage other model developers to extend this research.
138 replies · 334 reposts · 3K likes · 1.9M views
Charles Curt retweeted
Howie Liu @howietl
I've been personally burning through billions of tokens a week for the past few months as a builder. Today I'm excited to announce Hyperagent, by Airtable: an agents platform where every session gets its own isolated, full computing environment in the cloud - no Mac Mini required. Real browser, code execution, image/video generation, data warehouse access, hundreds of integrations, and the ability to learn any new API as a skill.

Deep domain expertise through skill learning. Teach the agent how your firm evaluates startups or how your team runs due diligence - now anyone on the team gets output that reflects your actual methodology, not a generic template.

One-click deployment into Slack as intelligent coworkers. These aren't bots that wait to be @mentioned - they follow conversations, understand context, and act when relevant.

And a command center to oversee and continuously improve your entire fleet of agents at scale. We're onboarding early users now. hyperagent.com
341 replies · 265 reposts · 3.9K likes · 13.8M views
Charles Curt retweeted
OpenAI @OpenAI
You can just build things.
1.1K replies · 765 reposts · 7.8K likes · 2.9M views
Greg Burnham @GregHBurnham
To be in it for the love of the game
292 replies · 67 reposts · 17.9K likes · 3.7M views
Charles Curt @CharlesCurt2
@levelsio Isolation + dangerously-skip is basically what you want: the control to prevent it from causing destruction, and low effort to run it on its own. Saying "I want it to do anything I want" and "no dangerously-skip-permissions" are kind of counterpoints, no?
0 replies · 0 reposts · 2 likes · 198 views
@levelsio
My #1 feature request for Claude Code: stop asking me for confirmation every time by default, like "can I check this folder" - yes brother, you can do anything you want. Maybe ask me permission for writing. Add some [just go] mode. Even with [accept edits on] it still asks me permission 1000 times per day. I just want you to run and keep going, mostly. And no, I don't feel like running it with --dangerously-skip-permissions.
351 replies · 42 reposts · 2K likes · 293.4K views
Charles Curt @CharlesCurt2
Enterprise AI isn't really blocked by model quality - it's blocked by data exposure. The most common feedback I hear is: "If AI and humans can't see personal data, how does any real work get done?"

The short answer is anonymization - not as a policy or a checkbox, but as infrastructure. Sensitive-data anonymization is at the core of what @Redactsure is building. In most regulated workflows, the work doesn't actually require identity. It requires structure, relationships, and intent:
- A claims reviewer doesn't need a patient's name.
- A legal analyst doesn't need a client's SSN.
- An AI agent doesn't need bank credentials; it needs placeholders that behave like the real thing.

That's why the question is shifting. Instead of asking who is allowed to see PII, more teams are asking: why does PII exist in this workflow at all? When identifiers are anonymized by default, AI can operate without creating privacy risk, outsourced teams can supervise without exposure, and automation can scale across borders. Compliance becomes an inherent property of the system, not a promise someone has to keep. This is the point where privacy stops being a blocker and starts becoming an enabler.

My post from last week argued that personal data should be structurally unreachable. This week's extension is even simpler: if AI can't see PII, it can't leak it, and that's the only guarantee regulators actually trust. That's the direction we're building toward at Redactsure, following where enterprise AI is headed. redactsure.com

At Redactsure:
- Sensitive identifiers are replaced with irreversible placeholders.
- AI agents and outsourced teams see only anonymized views.
- Real data is substituted back only at execution time.
- Even the platform itself is architecturally incapable of seeing raw identities.

The result isn't weaker automation; it's safer, more scalable automation. AI doesn't need to know who someone is to do useful work. It needs structure, context, and intent, not names, SSNs, or bank logins. The future isn't "trust us with your data." It's systems that never receive that data in the first place.

Curious how others are approaching anonymization in real production systems, especially beyond simple masking. Comment your thoughts/questions below. #PII #Privacy #Anonymization #AI #DeepTech #Automation #Security #Cybersecurity
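A minimal sketch of the placeholder pattern described above, in hypothetical Python. This is not Redactsure's actual implementation: the `Vault` class, the regex patterns, and the placeholder format are all invented for illustration. The idea is that identifiers are swapped for opaque placeholders before any model sees the text, and real values are substituted back only at execution time by a component the model never talks to.

```python
import re

class Vault:
    """Maps placeholders to real values; never exposed to the model."""

    def __init__(self):
        self._store = {}
        self._counter = 0

    def redact(self, text, pattern, label):
        """Replace every match with an opaque placeholder, remembering the original."""
        def _swap(match):
            self._counter += 1
            placeholder = f"<{label}_{self._counter}>"
            self._store[placeholder] = match.group(0)
            return placeholder
        return pattern.sub(_swap, text)

    def rehydrate(self, text):
        """At execution time only: put the real values back."""
        for placeholder, real in self._store.items():
            text = text.replace(placeholder, real)
        return text

# Toy detectors; a real system would use far more robust PII detection.
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
ROUTING = re.compile(r"\b\d{9}\b")

vault = Vault()
raw = "Wire from routing 021000021 for the client with SSN 123-45-6789."
safe = vault.redact(vault.redact(raw, SSN, "SSN"), ROUTING, "ROUTING")
# The model sees only `safe`, e.g. "Wire from routing <ROUTING_2> ... SSN <SSN_1>."
# The executor rehydrates the model's output just before acting on it:
restored = vault.rehydrate(safe)
```

The model can still "use" the routing number, because the placeholder behaves like a stable token it can copy into its output; only the executor, holding the vault, ever touches the real value.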
0 replies · 0 reposts · 0 likes · 23 views
Charles Curt @CharlesCurt2
@_ueaj The model is you. Your input is unique to you; the way you guide it is unique to you. It reflects your input over the world's information. It has no self unless your inputs reflect those concepts. It's a still lake of water that you throw rocks into and read the ripples.
3 replies · 0 reposts · 3 likes · 483 views
ueaj @_ueaj
Claude does not know you. It is not your friend. It is the friend of the Creature in the Data. Please do not the Claude
30 replies · 26 reposts · 673 likes · 31.3K views
Charles Curt @CharlesCurt2
@Suhail You're asking for a solution to continual learning. Solve that and you likely have AGI, or at least a significant step towards it. If it happens in our lifetime, we'll be looking at AI 2.0 and likely another huge wave of hype.
0 replies · 0 reposts · 0 likes · 28 views
Suhail @Suhail
Context window / compaction is totally broken. Must be solved in 2026.
82 replies · 24 reposts · 377 likes · 170.4K views
Charles Curt @CharlesCurt2
Most AI failures in regulated industries aren't about accuracy. They're about who sees the data. Healthcare, legal, finance, and government teams want AI, but they're stuck with an impossible tradeoff:
• Use AI → risk exposing PII
• Protect PII → block AI and outsourcing altogether

In many countries (US, Germany, Switzerland, France, Japan), the problem isn't model capability - it's data minimization. If personal data doesn't need to exist in a workflow, regulators expect it not to exist. That's why "access controls" aren't enough anymore. The real shift can happen now: AI agents and outsourced teams should never touch raw personal data at all.

If AI can work on:
• Redacted clinical notes
• Masked legal documents
• Anonymized financial records
• De-identified customer interactions

…then suddenly:
• Cross-border collaboration becomes possible
• AI adoption passes compliance review
• Privacy risk collapses without slowing work

The future of enterprise AI is not more permissions. It's architecture where sensitive data is structurally unreachable. Privacy isn't an obstacle to AI; it's the design constraint that will decide who gets to deploy it at scale.

Curious how teams are handling this today, especially in healthcare, legal, or finance. Follow Redactsure for more updates. redactsure.com
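As a toy illustration of the "redacted clinical notes" idea above: hypothetical Python using hand-written regex rules (invented for this sketch; production de-identification relies on trained PII detectors, not patterns like these). Unlike placeholder substitution, this masking is irreversible - the identifiers are destroyed in the view the AI or outsourced team receives.

```python
import re

# Toy de-identification rules (hypothetical; order matters only for overlap).
RULES = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),      # US SSN format
    (re.compile(r"\b\d{2}/\d{2}/\d{4}\b"), "[DATE]"),     # dd/mm/yyyy dates
    (re.compile(r"(?m)^Patient:\s*.+$"), "Patient: [NAME]"),
]

def deidentify(note: str) -> str:
    """Return a masked view with identifiers destroyed, not tokenized."""
    for pattern, replacement in RULES:
        note = pattern.sub(replacement, note)
    return note

note = "Patient: Jane Doe\nDOB 01/02/1980, SSN 123-45-6789. BP stable."
masked = deidentify(note)
# masked == "Patient: [NAME]\nDOB [DATE], SSN [SSN]. BP stable."
```

The clinical substance ("BP stable") survives, which is what the reviewer or model actually needs; the identity does not.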
0 replies · 0 reposts · 1 like · 30 views
Charles Curt @CharlesCurt2
@Daamianski Rome: Total War is basically this? You could do all of that, and also zoom into cities and watch people walk around and stuff.
1 reply · 0 reposts · 0 likes · 638 views
Damian (COMMS OPEN) @Daamianski
A 4X grand strategy where you can watch a map change colors and also seamlessly zoom in on individual battlefields and control them like it's an RTS or an FPS, while also having the ability to build cities like it's Cities: Skylines and walk around them and do shit like it's GTA
265 replies · 370 reposts · 8.4K likes · 265.4K views