Ksenia_TuringPost

19.1K posts

@TheTuringPost

Mom of 5, exploring AI & ML. From ML history to building with AI. aka @kseniase_ Know what you are talking about👇🏼

Join over 102,000 readers · Joined June 2020
11.1K Following · 84.9K Followers
Ksenia_TuringPost @TheTuringPost ·
OpenAI Agents SDK – an open orchestration layer for building multi-agent workflows.

It lets you define agents as LLMs with instructions, tools (APIs, functions, external systems), and guardrails, and supports:
• sessions with conversation history management
• human-in-the-loop
• tracing to monitor and debug how agents make decisions
• voice agents

An interesting feature is Sandbox Agents, which operate in a controlled environment with access to files, code repos, and terminal commands.

You can plug in 100+ models, including open ones (but you may need an adapter layer).
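The pattern the SDK formalizes can be sketched in plain Python. This is not the real Agents SDK API, just a toy illustration of the concepts the post lists: an agent bundles instructions, callable tools, a guardrail, and a session history. All names here (`get_weather`, `no_pii_guardrail`) are hypothetical.

```python
def get_weather(city: str) -> str:
    """Toy tool: in a real workflow this would call an external API."""
    return f"Sunny in {city}"

def no_pii_guardrail(text: str) -> str:
    """Toy guardrail: reject inputs that look like they contain an email."""
    if "@" in text:
        raise ValueError("guardrail tripped: possible PII in input")
    return text

class Agent:
    """Conceptual sketch, NOT the OpenAI Agents SDK: instructions + tools
    + guardrail + a session that keeps conversation history."""
    def __init__(self, name, instructions, tools, guardrail):
        self.name = name
        self.instructions = instructions
        self.tools = {t.__name__: t for t in tools}
        self.guardrail = guardrail
        self.history = []  # session: conversation history

    def run(self, user_input, tool_name, **tool_args):
        safe = self.guardrail(user_input)            # input guardrail
        self.history.append(("user", safe))
        result = self.tools[tool_name](**tool_args)  # tool call
        self.history.append(("tool", result))
        return result

agent = Agent(
    name="weather-agent",
    instructions="Answer weather questions using your tools.",
    tools=[get_weather],
    guardrail=no_pii_guardrail,
)
print(agent.run("Weather in Oslo?", "get_weather", city="Oslo"))  # Sunny in Oslo
```

In the real SDK, the LLM itself decides which tool to call; here the tool name is passed explicitly to keep the sketch self-contained.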
Ksenia_TuringPost @TheTuringPost ·
To really understand embeddings, you need a few core ideas:
- vectors and dimensions
- dense vs. sparse representations
- vector and embedding spaces
- what latent space means
- why semantic similarity matters
- how embeddings are formed

These concepts explain how meaning is represented in AI. Full guide (including RoPE): turingpost.com/p/embeddings
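Two of these ideas fit in a few lines of plain Python: dense vs. sparse representations, and semantic similarity measured as the cosine of the angle between vectors. The 3-dimensional vectors below are made-up toy values, not real embeddings.

```python
import math

# Dense embeddings: every dimension carries some weight (toy values).
king  = [0.80, 0.65, 0.10]
queen = [0.78, 0.70, 0.12]
apple = [0.05, 0.10, 0.90]

def cosine(a, b):
    """Cosine similarity: dot product divided by the vector norms."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Semantically close words get nearly parallel vectors (cosine near 1).
print(round(cosine(king, queen), 3))  # close to 1
print(round(cosine(king, apple), 3))  # much smaller

# Sparse representation (e.g. bag-of-words): almost all dimensions are
# zero, so only the nonzero ones are stored.
sparse_doc = {"king": 1, "crown": 1}
```

Real embedding models work the same way, just with hundreds or thousands of dimensions learned from data rather than three hand-picked ones.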
Ksenia_TuringPost @TheTuringPost ·
9 new approaches to multi-agent systems:
▪️ RecursiveMAS
▪️ OneManCompany (OMC)
▪️ OrgAgent
▪️ CORAL
▪️ LLMA-Mem
▪️ Agentic Federated Learning
▪️ CASCADE
▪️ GRASP
▪️ Reinforced Agent

These methods express a variety of genuinely interesting ideas! Learn more about them here: turingpost.com/p/9masmethods
Ksenia_TuringPost reposted
Ksenia_TuringPost @TheTuringPost ·
Agents are already moving at machine speed, but security is still stuck on static, outdated rules. We can close the gap.

On May 5, @rubrikInc is hosting a technical webinar on AI security at scale – Building AI Resilience: Managing Agent Risk with Trust Infrastructure → bit.ly/4cZcbI3

They’ll cover:
• Where security actually breaks: algorithms + human factors
• 3 pillars of trustworthy AI ops
• How trust infrastructure works in practice (Rubrik Agent Cloud)
• A real financial-services case where static rules fail

Register now: bit.ly/4cZcbI3
Ksenia_TuringPost @TheTuringPost ·
AI is our chance to finally understand how students think, where they get stuck, and how to help them learn.

Neeru Khosla, co-founder of @CK12Foundation, in our interview. Watch the full video to find out how AI can change education ->
Ksenia_TuringPost @TheTuringPost ·
There’s a serious gap in multimodal models – they work with images but still reason in language, which isn’t that precise for visual tasks.

@deepseek_ai just dropped an idea to solve this: let the model literally point to exact locations in the image while it thinks. They call it "Thinking with Visual Primitives."

These visual primitives are:
- points (specific locations)
- bounding boxes (areas in the image)

Using them, the model knows exactly what it’s referring to and achieves ~77% accuracy on average (vs. 76.5% for Gemini 3 Flash and 71.1% for GPT-5.4).

Plus, only ~80–90 visual tokens are kept in memory after compression, thanks to the efficient architecture.

Here is how it works:
Ksenia_TuringPost @TheTuringPost ·
5. By combining pointing + reasoning, the model improves both quality and efficiency:
- ~77% accuracy on average vs. 76.5% for Gemini 3 Flash and 71.1% for GPT-5.4
- ~289 tokens per image fed into the model (~2–4× fewer than other models)
- ~80–90 tokens kept in memory after compression, thanks to a more efficient architecture

So this is all about better reasoning with less visual data, because the model knows exactly what it’s referring to.
Ksenia_TuringPost @TheTuringPost ·
4. Training was also an important part. They trained two separate specialists (boxes + points) → refined them with reinforcement learning → merged both experts into one model + distilled their behavior into it.
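The final "merge both experts" step can be illustrated with the simplest possible scheme: uniform weight averaging. This is only a toy sketch of the arithmetic, not DeepSeek's actual merging or distillation procedure, and the parameter values are invented.

```python
# Toy parameter dicts standing in for the two trained specialists.
box_expert   = {"w1": 0.50, "w2": -0.25}
point_expert = {"w1": 0.75, "w2":  0.75}

# Uniform weight averaging: each merged parameter is the mean of the
# corresponding parameters of the two experts.
merged = {k: (box_expert[k] + point_expert[k]) / 2 for k in box_expert}
print(merged)  # {'w1': 0.625, 'w2': 0.25}
```

In practice, merging like this is usually followed by distillation, i.e. further training the merged model to reproduce each expert's outputs on its own specialty, which is what the post's "distilled their behavior into it" refers to.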