FormalFoundry.ai

51 posts

@FormalFoundry

We merge AI with formal methods. Using logic, math & proof assistants, we formalize domain knowledge for trusted, automated reasoning in critical systems.

Miami, FL · Joined June 2023
37 Following · 321 Followers
Pinned Tweet
FormalFoundry.ai @FormalFoundry
We won the @Milipol_Paris Innovation Award 2025 in Cybersecurity & AI. This sends a clear signal from the world’s leading homeland security stage that #neurosymbolic AI is a promising direction for delivering safe and explainable AI in the field.

While others debate “Prompt Engineering,” we use “Logic Engineering.” We bring mathematical and logical precision to domains where failure is not an option: compliance, national security, infrastructure and autonomous systems.

During our talks in Paris with defence officials and equipment manufacturers, the consensus was clear: black-box AI has hit its breaking point. The demand for transparent, auditable logic has become the industry’s priority.
FormalFoundry.ai @FormalFoundry
SpaceShield Summit 2026: Grzegorz Kunicki moderated and participated in the "Technological Sovereignty of Poland in the AI Era" panel alongside Ewa Dolińska-Wysocka (@bielikllm) and Michał Kwiatkowski (@AldecInc).

On the second day of the event, we presented our recent R&D progress and showcased CodexScribe. This tool advances our mission to bridge the gap between intuitive natural language and formal specifications, facilitating the iterative refinement of safety-critical logic.

We are moving beyond creative heuristics toward "Formalized Reasoning," where AI outputs are verified against rigorous domain logic to ensure outcomes are provably correct.
FormalFoundry.ai @FormalFoundry
As AI continues to redefine the landscape of defense and space, the question of autonomy has never been more critical. We are proud to announce that our Co-founder, Grzegorz Kunicki, will be joining the panel "Poland’s Technological Sovereignty in the Era of AI" at SpaceShield Summit 2026! 🛡️🚀

The discussion will dive deep into how strategic innovation and local AI capabilities are essential for national security and resilience in a dual-use world.

📅 March 3-4, 2026
📍 Stalowa Wola, Poland

We look forward to connecting with leaders across the defense, space, and deep tech sectors to discuss the future of secure, sovereign technology. See you there! 🤝
FormalFoundry.ai @FormalFoundry
@peterwildeford As proof generation gets cheaper, defining the right properties becomes the real challenge. That’s exactly the gap we close at FormalFoundry.
Peter Wildeford🇺🇸🚀 @peterwildeford
We have the ability to write software that is "formally verified", where it is proven to not have bugs. Right now this is expensive to do, but AI dramatically reduces the cost of generating proofs. Also formal verification would make reviewing AI-generated code much easier.
FormalFoundry.ai @FormalFoundry
AI is making formal verification scalable. As Martin Kleppmann notes, proofs are no longer the bottleneck - specifications are. At FormalFoundry, we help translate expert knowledge into precise logic. Formalize your business rules without a PhD. Ask us about our pilot programs 🚀 martin.kleppmann.com/2025/12/08/ai-…
FormalFoundry.ai @FormalFoundry
Great sessions with the MBA students at @UR_Rzeszow, led by our COO. By illustrating the friction between probabilistic models and the need for deterministic guarantees, he highlighted the danger of scaling stochastic intuition without the safety net of formal logic. Organizations that ignore this gap risk automating errors at an unprecedented scale - undermining the very efficiency they set out to achieve. We believe in educating leaders who understand that true innovation requires the immense potential of agentic workflows, backed by formal logic and verifiable safety.
FormalFoundry.ai @FormalFoundry
Artificial intelligence is now one of the strongest forces shaping the modern world. At the upcoming “Technology, Economy and Ecology – Three Forces Shaping the World” national conference, organised by the Institute for Security and International Development (@InstytutBiRM), Grzegorz Kunicki, COO of Formal Foundry, will speak about why trust has become the new currency of technological progress.

Talk title: “The limits of trust in technology in an era of AI-driven change. When system decisions become opaque - how to regain control.”

His presentation will explore how world modeling, formalization and formal verification can restore clarity, accountability and control in AI-driven environments.

@InstytutBiRM is a research-driven think tank engaging with economic, geopolitical and security challenges. Its mission is to deepen public understanding of the forces shaping global stability and development.
FormalFoundry.ai @FormalFoundry
@razzy_ar @Ngnghm @onehappyfellow That’s just a convention difference between the standard Agda library and the Cubical one: in Cubical Agda, `ℓ-max` is used where the standard library would use `_⊔_`. They refer to the same built-in Agda primitive.
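For readers outside the Agda world, the same "maximum of two universe levels" idea can be mirrored in Lean 4, where it is spelled `max`. This is only an analogue of the Agda primitive, with illustrative names of our own:

```lean
-- Lean 4 analogue (illustrative): combining two universe levels.
-- Agda std-lib's `_⊔_` and Cubical's `ℓ-max` both name this operation.
universe u v

-- A pair of types from two universes lands in `Type (max u v)`.
structure Pair' (A : Type u) (B : Type v) : Type (max u v) where
  fst : A
  snd : B
```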
LitFill @razzy_ar
@Ngnghm @onehappyfellow I am looking at the Implementation.agda, is there a reason why you use \ell-max instead of _\lub_ ?
One Happy Fellow @onehappyfellow
what advantages of cubical type theory vs the calculus of constructions are relevant to maths formalisation? e.g. what would the benefits be to big formalisation projects if Lean supported CTT?
FormalFoundry.ai @FormalFoundry
Beam engines started the revolution; thermodynamics and precision engineering scaled it. LLMs will kick off a lot, but scaling long-agentic workflows needs formalized domains + a precise, machine-checkable proof assistant alongside the model. DM us to set up a call to see the demo.
Paul Graham @paulg

Prediction: LLMs in their current form may not be able to do everything, but AI now has enough momentum that this won't matter. Beam engines couldn't do everything either, but they were enough to set off the Industrial Revolution.

FormalFoundry.ai @FormalFoundry
California’s SB-53 calls for transparency frameworks describing how AI developers apply standards and best practices. At FormalFoundry, we explore ways to translate such frameworks into formal, machine-readable specifications - so compliance can be proven, not just declared in a PDF. Our aim is to make “trust but verify” an engineering property, not a policy slogan.
Governor Gavin Newsom @CAgovernor

Today, I'm signing legislation to install common-sense guardrails ensuring the safety and further development of cutting-edge AI systems. California is proving that it's possible to both protect people and ensure our state's growing industries continue to shape the future.

Yaron (Ron) Minsky @yminsky
Does anyone know of an experimental study of the efficacy of AI agents when working with statically typed vs dynamically typed codebases and languages? I'd be very curious to see results in this space!
FormalFoundry.ai @FormalFoundry
@Milipol_Paris Looking forward to being there! We'll be at Milipol Lab (Hall 4) showing how we're making AI trust verifiable - bridging the gap between human reasoning and mathematical certainty. Don't miss our live talk at Innov’Arena - 19 Nov, 13:00. Let's connect!
Milipol Paris @Milipol_Paris
A month to go before #MilipolParis - discover what's on the 2025 programme 🕵️ ⤵️ The number of companies exhibiting at Milipol this year will be the highest in its history. Don't miss the event, request your free pass now and join us from 18 to 21 Nov ➡️ tinyurl.com/4mhdsxpt
FormalFoundry.ai reposted
FormalFoundry.ai @FormalFoundry
Absolutely - but only if the logic exists in the first place. You can’t verify what you can’t formalize.

Today’s safety stack runs on layers: filters, red teams, guardrails, runtime monitors. All useful - but they still rely on human phrasing, prompts, or policy docs written in English. And that’s the bottleneck: ambiguity. Before an AI can be verified, someone has to express what “safe” means in precise, mathematical terms.

We’re building the tool that creates those definitions - a bridge between expert intent and proof assistants, the math engines that can actually check if logic holds. Our system takes a rule stated in plain language, turns it into machine-checkable logic, verifies it, and reads back the confirmed meaning in clear text.

Every company building AI safety infrastructure will eventually need this capability. Because without formalization, “logical consistency” is just hope with a good vocabulary. We’re building the machinery that makes it real.

First demos are coming - not slides, not slogans, but proofs running live. The next era of AI safety won’t be about intuition. It’ll be about math that checks out.
FormalFoundry.ai @FormalFoundry
Spot on - human communication is a beautifully messy tapestry of signals, often defying clean capture. As someone who's navigated those challenges firsthand (respect for sharing the Asperger's lens), you highlight why mimicking the full spectrum isn't our goal.

We're zeroing in on concrete domains where rules can be pinned down: think compliance policies, ethical invariants, or operational constraints in high-stakes AI. These aren't the fluid art of conversation; they're the guardrails that must hold firm, expressed in logic that proof assistants can interrogate for contradictions.

By formalizing just these - starting with language but extensible to structured inputs - we create verifiable anchors. AI stays sane not by copying our intuition, but by adhering to math we humans have vetted.
Steve Knapp @divergentSteve
@FormalFoundry @elonmusk Language is merely the framework upon which communication builds Humans communicate with words, body, intonation, eye movements, knowledge of each other and even telepathy. Good luck copying us! Having Asperger's... I understand that even humans don't understand humans!
FormalFoundry.ai reposted
Elon Musk @elonmusk
Logical consistency is essential to the sanity of AI
Chris @Eman_Resuym
@FormalFoundry @elonmusk The initial premise, so to speak. I happen to be a believer in objective truth, though, and we are pretty rare in a post-modern, subjective world.
FormalFoundry.ai @FormalFoundry
A symbolic DSL where each sense gets its own name can help. The good news is that proof assistants already give us a rich way to pin down meaning - far beyond just picking word variants - and tie it to context, roles, timing, and “never/always” rules. But you’re right: that’s exactly why formalization isn’t straightforward. Someone has to face the ambiguities and decide. We’re up for that work. Our approach is to structure it: let AI take a first pass at disambiguation, keep domain experts in the driver’s seat for the hard calls, and record every decision so the safety layer has a clear, auditable history. If you’re curious, we can show a short POC - early but real - and compare notes on where the rough edges are.
FormalFoundry.ai @FormalFoundry
Totally fair challenge - and we don’t want to hand-wave it away. Human language is ambiguous by design; that ambiguity carries context, tacit knowledge, and institutional nuance. Turning that into something a machine can act on is hard, sometimes uncomfortable work. It’s also the exact place where today’s safety stacks get brittle: if the premise stays informal, “logical consistency” can just mean the system reliably repeats a misunderstanding.

We’re not arguing to deprecate human language. We’re arguing to deprecate informality at the machine boundary. Keep natural language for intent; require mathematics for execution.

What that looks like in practice: an expert states a rule in plain language (e.g., “no self-approval for high-risk actions”). Our tool proposes a formal specification, asks for clarifications where the language is underspecified, and checks the result in a proof assistant. Only definitions that type-check become part of the model. The system then reads the verified meaning back in clear text so non-specialists can confirm it says what they intended, with a full audit trail (intent → spec → proof → read-back).

Why we think this is worth the effort now: the upside is structural. Once the premise is formal, every downstream verifier, monitor, or policy engine operates on precise, durable semantics instead of prose. Neuro-symbolic methods are coming; they’ll need explicit, machine-checkable premises to reason over. Proof assistants provide the expressivity and rigor to encode real constraints (dependent types, invariants, totality) without hand-waving.

On cost and practicality: historically, formalization was slow and specialist-only. We believe we can make the formalization step ~2 orders of magnitude cheaper and faster in targeted domains by keeping humans in the loop but compressing the loop - propose, disambiguate, prove, read back - instead of weeks of back-and-forth. This still requires commitment from domain experts; it just turns their expertise into artifacts the rest of the safety stack can actually trust.

If you’re curious, we’re happy to walk you through what we have today. Fair warning: these are proofs of concept, not polished products, and we’re keen to show where they already work and where they still break.
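The "no self-approval for high-risk actions" rule can be sketched as a machine-checkable proposition. The following Lean 4 snippet is purely our illustration - `Approval` and `NoSelfApproval` are hypothetical names, not FormalFoundry's actual specification language:

```lean
-- Hedged sketch: all names here are illustrative, not a real product spec.
structure Approval where
  requester : Nat   -- id of the person requesting the action
  approver  : Nat   -- id of the person signing off
  highRisk  : Bool  -- whether the action is classified high-risk

-- "No self-approval for high-risk actions" as a proposition:
-- if the action is high-risk, requester and approver must differ.
def NoSelfApproval (a : Approval) : Prop :=
  a.highRisk = true → a.requester ≠ a.approver

-- A concrete record is then checked mechanically by the proof assistant.
example : NoSelfApproval { requester := 1, approver := 2, highRisk := true } := by
  intro _
  decide
```

Only definitions like this that type-check would enter the model; the read-back step would then render the proposition back into plain language for the expert to confirm.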
Steve Knapp @divergentSteve
@FormalFoundry @elonmusk Your greatest challenge: At its very best... Human language is ambiguous! Make a list of words, in any language, that have one and only one possible interpretation... Short list. Human language is not precise! Computer language is precise. Which language shall you deprecate?
FormalFoundry.ai @FormalFoundry
Totally fair point: reason chooses the premise; logic checks whether the steps follow. In engineering terms, reason defines what must be true; logic enforces that it stays true. Today’s AI safety stack is strong on enforcement (guards, monitors, policy engines) but weak at the source: the premise is rarely made explicit, precise, and calculable. Without that, “logical consistency” can just mean consistently wrong.

Our thesis is simple: before AI can reason with a premise, the premise has to be formalized. Not prose. Not a prompt. A machine-checkable invariant that states exactly what “safe” means in a given context (e.g., “no self-approval,” “two-person rule,” “PII never leaves domain X”).

That’s the layer we work on. We capture expert intent, encode it as mathematics, and check it in a proof assistant (Agda today, designed to extend to Lean/Coq). Only definitions that type-check become part of the specification. Then we read the verified meaning back in clear language so non-specialists can confirm it says what they intended. The whole process is traceable: intent → formal spec → proof → human-readable confirmation.

Why this matters now: we think neuro-symbolic methods are imminent. As they land, they’ll need expressive, verifiable premises to reason over - otherwise they inherit ambiguity from policy docs and prompts. Proof assistants are the safe bet here: they give us the expressive types, totality, and proofs needed to state rich constraints without hand-waving.

Our value proposition: we’re not another runtime safety layer. We’re the foundation beneath all of them. By producing formal, auditable premises, we make verifiers, monitors, and policy engines actually operate on math, not metaphors. Think of it as CAD for reasoning: you sketch the rule in human terms, the system snaps it to mathematical geometry, we press “check,” and you get both a proof and a plain-language read-back. Now engineering and audit share the same source of truth.

That’s how we reconcile reason and logic in practice: Reason → write down the premise precisely. Logic → prove the consequences reliably. Operations → run everything against a spec that can’t drift.

Demos are coming - live captures of premise → formal spec → proof → read-back on real workflows. Our bet is clear: formalization is the missing step; math is the interface; and proof assistants are how we make “logical consistency” something you can prove, not just hope for.
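The "two-person rule" mentioned above can likewise be stated as a proposition and discharged with concrete witnesses. Again a hedged Lean 4 sketch with names of our own choosing, not FormalFoundry's spec language:

```lean
-- Hedged sketch: `TwoPersonRule` is an illustrative name of ours.
-- "Two-person rule": at least two distinct ids appear in the approver list.
def TwoPersonRule (approvers : List Nat) : Prop :=
  ∃ a b, a ∈ approvers ∧ b ∈ approvers ∧ a ≠ b

-- Supplying the two witnesses makes the check fully mechanical:
-- membership and inequality are closed by `simp` and `decide`.
example : TwoPersonRule [7, 9] :=
  ⟨7, 9, by simp, by simp, by decide⟩
```

An invariant like this can sit at the machine boundary: the policy document keeps the prose, the proof assistant keeps the obligation.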
Chris @Eman_Resuym
@FormalFoundry @elonmusk This is all well and good, however, the difference between logic and reason is crucial to this discussion. Logic only cares about, 'If point A (starting point) is true, do points B, C and D follow (without violating the laws of logic)?' Reason asks whether first premise is valid