
Datawizz AI
@datawizzai
Datawizz helps companies transition to Specialized Language Models
Joined February 2025
5 Following · 122 Followers
Datawizz AI reposted

After taking some time off post-Rapid, I'm excited to share what I’ve been up to since: @datawizzai! We’ve raised a $12.5M Seed led by @humancapital to make AI 10x cheaper, 2x more accurate and 15x faster by transitioning from LLMs to SLMs.
AI is eating the world. But unit economics are eating AI.
Looking at the fastest-growing AI products, they all share two traits: rapid growth and painful inference bills. General-purpose LLMs are simply too expensive to run. A big reason is that we train LLMs to be good at everything - answer any question, be an expert on any topic. The big labs dub this "generalisation", but for most real-world applications it is unnecessary.
In reality - many AI applications need models to be experts in one thing - and do that thing extremely well. Your coding model doesn’t need to memorize ancient recipes for Garum sauce.
This is where Datawizz comes in: we sit between your AI application and its models, automatically creating smaller (100x-1,000x) specialized models to handle specific aspects of your workload. By focusing the model and incorporating industry data in the distillation process, we end up with models that beat SOTA LLMs at a fraction of the cost.
We created Datawizz to make AI specialized and scalable. We’re early in the journey, but have already been able to save companies 90%+ on their inference bill and speed up their apps by 10x.
Excited to build better AI platforms? Join the Datawizz team (link in first comment)

Are OpenAI's newest models hallucinating more than before?
Hallucinations have always been one of the biggest issues plaguing AI deployment. Now the problem seems to be getting worse, not better, with newer models - especially powerful reasoning models.
The reality is that hallucinations are not a bug of LLMs per se, but a byproduct of their core structure. LLMs are statistical token-prediction models; they are not built to generate "truth".
That means hallucinations must be addressed at the application layer - in how we prompt the model, extract results, and perform quality checks. We've put together a list of approaches we have deployed with our customers to mitigate LLM hallucinations. Check out the link in the first comment!
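One application-layer check of the kind described above is validating a model's extracted output before trusting it. A minimal sketch, not Datawizz's actual pipeline - `extract_with_check` and the field names are hypothetical:

```python
import json

def extract_with_check(raw_answer, required_keys):
    """Parse a model answer that is supposed to be JSON and reject it
    when it isn't valid or is missing required fields, instead of
    passing hallucinated free text downstream."""
    try:
        data = json.loads(raw_answer)
    except json.JSONDecodeError:
        return None  # signal the caller to re-prompt or fall back
    if not all(key in data for key in required_keys):
        return None
    return data

# A malformed or incomplete answer is rejected rather than trusted:
assert extract_with_check('{"invoice_total": 129.5}', ["invoice_total"])
assert extract_with_check("The total is probably 129.5", ["invoice_total"]) is None
```

Returning `None` rather than a best-effort guess forces the caller to handle the failure explicitly - re-prompt, escalate, or fall back - instead of silently propagating a hallucination.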


We built Prompt Debloat to help visualize which tokens (words / parts of words) have the most (and least) impact on the LLM answers.
We use a technique called Token Ablation. How does it work?
At every step we remove a token, re-run the prompt and check how the model confidence changes (as measured by average output token logprobs). Removing important tokens dramatically changes the confidence. Removing bloat (like a "could you please") doesn't really change the confidence.
This is far from a foolproof approach, but it provides a good first pass on which parts of your prompt matter and which don't.
Check it out here -- promptdebloat.datawizz.ai
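The ablation loop above can be sketched in a few lines. The scorer here is a toy stand-in invented for illustration; a real run would call the model and average the logprobs of its answer tokens:

```python
def token_ablation(tokens, score):
    """For each token, re-score the prompt with that token removed and
    report the drop in confidence (average output-token logprob)."""
    baseline = score(tokens)
    return [(tok, baseline - score(tokens[:i] + tokens[i + 1:]))
            for i, tok in enumerate(tokens)]

# Toy confidence proxy (hypothetical): more signal-carrying keywords
# means a higher (less negative) average logprob.
KEYWORDS = {"summarize", "quarterly", "revenue"}

def dummy_score(tokens):
    return -2.0 + 0.4 * sum(1 for t in tokens if t in KEYWORDS)

prompt = "could you please summarize quarterly revenue".split()
impacts = dict(token_ablation(prompt, dummy_score))
```

With this toy scorer, ablating "summarize" drops confidence by 0.4 while ablating "could" changes nothing - exactly the signal-vs-bloat split the tool visualizes.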

@datawizzai Would love to have you on my podcast or nationwide TV show sometime at Techimpact.tv!

@adriankuleszo It was a tough call between these directions - but this variant does look 🔥

Another take on @datawizzai branding.
Our team loved this one, but the client went with the last option we presented.
Sometimes that happens - what you like won't always resonate with your client's vision.
This is exactly why we always create multiple strong concepts and let the client choose what aligns with their goals.
All the "rejected" options still make for great portfolio pieces. This one was too good not to share! 😁
