Dialog XR

31 posts

Dialog XR banner
Dialog XR

@DialogXR

An on-premise and private cloud LLM and SLM for sensitive workloads, optimised for the Intel Gen6 AMX architecture for efficient inferencing.

UK Joined March 2026
36 Following 1 Followers
Dialog XR retweeted
Bikal Tech
Bikal Tech@BIKAL_TECH·
London Internet Exchange (LINX)@LINX_Network

It's finally here! Today marks the first Digital Infrastructure North 2026 #DIN26! We are proud to be supporting a new event for the North 🇬🇧, bringing together those passionate about the infrastructure and connectivity that serves the public sector. Convened by Manchester Digital, Cooperative Network Infrastructure, Greater Manchester Combined Authority and LINX, today’s event brings together senior leaders from government, infrastructure operators, technology companies and research organisations to explore opportunities and shape the future of digital infrastructure in the north. Our very own LINX CEO Jennifer Holmes, EMBA kicked things off by sharing an insight into why the north matters for a resilient internet in the UK! #LINXManchester #PeeringandMore #Interconnection #NetworkControl #Data #KeepTrafficLocal #NetworkSecurity #Peering #LINX #IXP

0
1
2
6
Dialog XR
Dialog XR@DialogXR·
The Core Difference: Public‑cloud #AI answers questions. Dialog XR improves operations. Public‑cloud AI predicts text. Dialog XR reasons, decides, and acts. Public‑cloud AI is a service. Dialog XR is a capability. Public‑cloud AI is external. Dialog XR is #sovereign.
GIF
English
0
0
0
5
Dialog XR
Dialog XR@DialogXR·
Dialog XR is a sovereign, on‑premise Agentic AI built exclusively for enterprise environments. Because it runs entirely within your infrastructure, it remains 100% sovereign, optimised for your data, your workflows, and your operational context. instagram.com/p/DX2OkcQEm9_/…
Dialog XR tweet media
English
0
1
1
13
Dialog XR
Dialog XR@DialogXR·
It does not cost you to be polite. Well, sometimes it does. #PublicCloud chat tools bill by the token, so every input and output carries a charge. Dialog XR, a sovereign #AgenticAI, does not require the user to adjust how they interact. instagram.com/p/DXuHa16moxM/…
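A hedged sketch of why wording costs money on metered chat APIs: both the input and the output are billed per token, so extra pleasantries are extra billed tokens. The prices and the words-to-tokens ratio below are illustrative assumptions, not any provider's actual rates.

```python
# Rough sketch of per-token billing on a public-cloud chat API.
# Prices and the 0.75 words-per-token ratio are illustrative assumptions.
def estimate_tokens(text: str) -> int:
    # Crude heuristic: ~0.75 words per token (~1.33 tokens per word).
    words = len(text.split())
    return max(1, round(words / 0.75))

def chat_cost(prompt: str, reply: str,
              usd_per_1k_in: float = 0.005,
              usd_per_1k_out: float = 0.015) -> float:
    # Both the user's input and the model's output are metered.
    tokens_in = estimate_tokens(prompt)
    tokens_out = estimate_tokens(reply)
    return tokens_in / 1000 * usd_per_1k_in + tokens_out / 1000 * usd_per_1k_out

terse = chat_cost("Summarise this report.", "Summary: revenue up 4%.")
polite = chat_cost("Hello! Could you please summarise this report for me? Thank you!",
                   "Of course! Happy to help. Summary: revenue up 4%.")
print(f"terse: ${terse:.6f}  polite: ${polite:.6f}")
assert polite > terse  # the pleasantries alone more than double the bill here
```

Each exchange is fractions of a cent, but the per-token meter means verbosity compounds across millions of interactions, which is the contrast the post draws with a flat on-premise deployment.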
Dialog XR tweet media
English
0
0
0
8
Dialog XR
Dialog XR@DialogXR·
Mid‑term adjustments (MTAs) in #motorinsurance may look routine, but they’re one of the earliest and most reliable signals of future fraud. See the details of how DialogXR can assist claims handlers and investigators in finding the influences behind the crime. instagram.com/p/DXuE1ZPmqfY/…
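As a toy illustration of the MTA signal described above, one could flag claims that arrive shortly after a mid-term adjustment. The 30-day window and the function shape are assumptions for this sketch, not Dialog XR's actual detection logic.

```python
from datetime import date, timedelta

# Hypothetical MTA-as-fraud-signal check: flag a claim when a mid-term
# adjustment landed within a short window before it. Window length and
# field shapes are illustrative assumptions only.
def mta_flag(mta_date, claim_date, window_days: int = 30) -> bool:
    """True when an MTA occurred within `window_days` before the claim."""
    if mta_date is None:
        return False
    gap = claim_date - mta_date
    return timedelta(0) <= gap <= timedelta(days=window_days)

# Cover adjusted days before a claim: route to an investigator.
print(mta_flag(date(2025, 6, 1), date(2025, 6, 10)))   # → True
# No adjustment on the policy: nothing to flag.
print(mta_flag(None, date(2025, 6, 10)))               # → False
```

A production system would weigh this signal alongside others (claim value, rental demands, policy age) rather than treat any single MTA as proof of fraud.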
Dialog XR tweet media
English
0
0
0
1
Dustin
Dustin@r0ck3t23·
Ilya Sutskever just told the AI industry why scaling is finished. One word built it. One word is about to break it.

Sutskever: “Scaling is just one word, but it’s such a powerful word because it informs people what to do.”

For five years, that single word replaced an entire research culture. Nobody needed breakthroughs. They needed bigger checks.

Sutskever: “If you mix some compute with some data into a neural net of a certain size, you will get results, and you will know that it will be better if you just scale the recipe up.”

That’s not science. That’s a recipe.

Sutskever: “Companies love this because it gives you a very low risk way of investing your resources.”

The most transformative technology in human history ran on the same logic used to franchise a restaurant chain. More locations. More ingredients. Same recipe. Predictable returns. You didn’t need researchers who could see around corners. You needed accountants who could approve purchase orders. But recipes expire.

Sutskever: “At some point though, pre-training will run out of data. The data is very clearly finite.”

Five years of infrastructure. Five years of hiring. Five years of investor decks. All built on top of something temporary.

Sutskever: “I don’t think that’s true.” The co-founder of OpenAI. The mind behind the breakthroughs that made this entire era possible. Saying more money won’t solve it.

Sutskever: “In some sense we are back to the age of research.”

Most of the companies racing to build AGI were never research companies. They were scaling companies. They hired for execution. Not discovery. They optimized for throughput. Not insight. The talent pipelines. The investor pitches. The board decks. All built around one assumption. That the recipe would never expire.

It’s expiring. And the companies that spent five years perfecting the art of spending money are about to discover something. The next era demands what capital can’t purchase. An original idea.
English
67
142
1.2K
148.9K
Dialog XR retweeted
Bikal Tech
Bikal Tech@BIKAL_TECH·
Bikal leverages its relationships with - Public sector for historical data - Private sector for historical data - Universities for research and - Domain experts for problem statement definition to form a tech transfer plan to solve a problem. instagram.com/p/DXkAcNUD1Tn/…
Bikal Tech tweet media
English
0
1
1
16
Dialog XR
Dialog XR@DialogXR·
In motor insurance fraud, one of the most telling data events occurs when a replacement vehicle is hired by the claimant. The claimant alleges that their 10-year-old #BMW X5 is undrivable and demands an equivalent rental car. instagram.com/p/DXj_PFpmk7T/…
Dialog XR tweet media
English
0
0
0
9
Dialog XR retweeted
Simplifying AI
Simplifying AI@simplifyinAI·
turns out "hallucination-free" AI was a lie the whole time.. stanford and yale just published the first real audit of LexisNexis and Thomson Reuters' legal AI tools.. the ones marketed to every lawyer in america as "100% hallucination-free." the results are brutal: → LexisNexis hallucinates 17% of the time → Thomson Reuters hallucinates 17% AND refuses to answer 62% of the time → one response claimed Justice Ginsburg dissented in Obergefell. she didn't. she joined the majority. → another cited a real case to defend a law the supreme court already overturned → Lexis even cited opinions by "Judge Luther A. Wilgarten", a judge who has never existed RAG doesn't kill hallucinations. it just hides them behind real-looking citations. and lawyers are getting sanctioned for trusting it. 100% peer-reviewed. from stanford law.
Simplifying AI tweet media
English
32
113
268
15.3K
Dialog XR
Dialog XR@DialogXR·
#TRL scales are representative of how #techtransfer is executed, where #universities, small firms and enterprises (public & private sector) collaborate to solve problem statements. The diagram shows a very ordered way in which it should occur, but this is not what happens in reality.
Dialog XR tweet media
English
0
0
0
1
Dialog XR
Dialog XR@DialogXR·
Dialog XR is a leading sovereign agentic AI platform, delivered entirely on‑premise or in a private cloud, ensuring 100% data sovereignty from day one. Once data trust and security are established, IT leaders can confidently engage colleagues across sales, operations, and other depts.
Dialog XR tweet media
English
0
0
0
11
Tom Forth
Tom Forth@thomasforth·
The most central data centre in Leeds is at @aqldotcom in a converted and very historic old Methodist chapel. It has a glass ceiling on the data centre so you can run events on top of it. Here's the UK Prime Minister and Chancellor running one such event just over a decade ago.
Tom Forth tweet media
English
3
7
115
17.7K
Dialog XR
Dialog XR@DialogXR·
@rryssf While much of the current focus remains on data centers, the transformation extends to every end-user device; even the humble laptop is on the verge of a fundamental evolution.
English
0
0
0
22
Robert Youssef
Robert Youssef@rryssf·
🚨 BREAKING: Meta AI just published a roadmap to replace conventional computers with neural networks. The goal: a single set of weights that handles computation, memory, and I/O the way your CPU, RAM, and operating system do today, but learned entirely from screen recordings and user interactions.

Every computer you have ever used runs on the same basic architecture invented in the 1940s. Explicit programs. Separate hardware for compute, memory, and input/output. An operating system sitting between you and the machine. Meta AI just published a paper arguing this entire stack should be replaced by a single neural network. They call it a Neural Computer.

> Not an AI assistant running on top of a computer.
> Not an agent that controls your mouse and keyboard.

The computer itself, learned from data.

The core idea is straightforward. Every time you interact with a computer, you produce a stream of inputs and outputs. Keystrokes. Mouse movements. Screen states. Terminal sessions. Application transitions. Meta's proposal: train a neural network on those streams until the network itself can reproduce the computer's behavior. No operating system. No instruction set. No explicit programs. Just weights that learned what a computer does by watching it happen.

They call the mature version of this a Completely Neural Computer. To qualify, it needs to be Turing complete, universally programmable, and behavior-consistent unless explicitly reprogrammed. In plain English: it needs to do everything a conventional computer can do, be reprogrammable like a conventional computer, and not silently change its own behavior during normal use. No existing system meets all three criteria. But Meta built early prototypes to test whether the idea is even tractable.

The first prototype learns to simulate a command-line terminal from screen recordings. They trained it on 1,100 hours of real terminal sessions and 250,000 scripted terminal sessions. The model learned to render readable terminal output, maintain cursor state across frames, and execute short command chains. Character-level text accuracy reached 54% at 60,000 training steps, up from 3% at initialization.

The second prototype learns to simulate a desktop GUI from mouse and keyboard inputs. They trained it on 1,500 hours of desktop interaction, including 110 hours of goal-directed sessions from Claude CUA. The model learned cursor tracking, click feedback, hover states, and window transitions. Cursor accuracy reached 98.7% with explicit visual supervision, up from 8.7% with coordinate-only training.

Then they tested arithmetic. If a neural computer is going to replace a real computer, it needs to handle symbolic computation. Basic math. The kind every calculator has handled since 1972. The results were humbling:
→ Wan2.1 (base video model): 0% arithmetic accuracy
→ Meta's NCCLIGen prototype: 4%
→ Veo 3.1: 2%
→ Sora 2: 71% (the notable outlier)

The gap between 4% and what a $5 calculator does is the entire distance between a prototype and a real computer. Meta knows this. The paper is explicit: symbolic stability, routine reuse, and runtime governance are all unsolved. The current prototypes are strong renderers and controllable interfaces. They are not native reasoners.

But the direction is the point. Conventional computers are programmed through explicit code. Neural computers would be programmed through interaction: prompts, demonstrations, screen recordings, and usage traces. The training data for this kind of system is not scarce. Every person using a computer generates it continuously. Keystrokes, cursor movements, application states, terminal sessions: all of it is logged interaction that could serve as executable specification for a learned machine.

Meta's argument: the world produces orders of magnitude more interaction data than high-quality code. If neural computers work, programming shifts from writing code to curating interactions. The operating system disappears into the weights. The instruction set disappears into the weights. The entire stack that sits between human intent and machine behavior collapses into a single learned runtime.

That is the bet. The prototypes do not prove the bet pays off. They prove the direction is not obviously wrong.
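The training setup the thread describes, learning a machine's behaviour purely from recorded interaction streams, can be caricatured in a few lines. A real system would be a large neural network trained on screen recordings; here a memorised transition table stands in for the learned weights, and every name is illustrative, not from the paper.

```python
# Toy caricature of the "neural computer" idea: learn a machine's behaviour
# purely from recorded (state, input, next_state) streams, then drive the
# learned model instead of the real machine. A dict of transitions stands
# in for trained weights.
def record_session(machine, inputs):
    """Replay inputs on a reference machine, logging every transition."""
    traces, state = [], machine["init"]
    for key in inputs:
        nxt = machine["step"](state, key)
        traces.append((state, key, nxt))
        state = nxt
    return traces

def train(traces):
    """'Training' here is memorising observed transitions (stand-in for SGD)."""
    return {(state, key): nxt for state, key, nxt in traces}

def run_learned(model, state, inputs):
    """Execute on the learned model; unseen transitions fall back to a no-op."""
    for key in inputs:
        state = model.get((state, key), state)
    return state

# Reference "computer": a one-line terminal that appends keys; CR clears it.
terminal = {"init": "", "step": lambda s, k: "" if k == "\r" else s + k}

model = train(record_session(terminal, list("ls\rcat")))
print(run_learned(model, "", list("ls\rcat")))  # → cat
```

The hard parts the paper flags, generalising beyond memorised transitions, symbolic stability, and not silently drifting in behaviour, are exactly what separates this lookup table from a Completely Neural Computer.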
Robert Youssef tweet media
English
20
33
143
24.1K