Connected Data

5.9K posts


@Connected_Data

Connecting Data, People & Ideas since 2016. Using relationships, meaning, context in Data to achieve great things #KnowledgeGraph #GraphDB #AI #SemTech

The World · Joined April 2016
2.5K Following · 6.6K Followers
Pinned Tweet
Connected Data @Connected_Data
🚀 The wait is over! The Call for Submissions for #CDL26 is NOW OPEN.

Be a part of the celebration: 10 Years Connecting Data, People and Ideas. The leading global technology conference for those using Relationships, Meaning, and Context in Data to achieve great things. Join us in the heart of London as we celebrate a decade of innovation in Knowledge Graphs, Graph Analytics, Data Science, AI, Graph Databases, Semantic Tech and Ontology this November. Share your use cases and breakthroughs.

Submissions are open across 2 areas:
• Presentations: real-world use cases and innovative approaches across 3 tracks: Nodes (focus on use cases), Edges (focus on innovation), Educational (focus on applications).
• Masterclasses: hands-on tutorials in which instructors teach attendees skills they can use in their daily work.

Why speak at CDL26?
• Global platform: join 350+ luminaries who have graced our stage and reach our ever-growing global audience of thousands.
• Adoption and innovation: from the resurgence of Ontologies to the cutting edge of Agentic AI and Context Graphs.
• Speaker benefits: free event pass, speaker guidance, and exclusive network discounts.

📅 Deadline: Aug 31
✅ Notification of acceptance: September 14, 2026

Topics of interest and submission guidelines here: 🔗 connected-data.london/2026-call-for-…

#ConnectedData #KnowledgeGraphs #DataScience #AI #GraphDB #Analytics #SemTech #EmergingTech
Connected Data @Connected_Data
The Context Layer: Knowledge Graph's Second Act

Knowledge graphs have always been about one thing: making meaning machine-readable. Entities, relationships, definitions, rules, and the exceptions to those rules. That body of work, long treated as unglamorous infrastructure, turns out to be exactly what enterprise AI is missing.

Models are getting exponentially smarter, but not exponentially more useful. That gap is the whole problem. Performance = Intelligence x Context. It is multiplicative for a reason. A context score of zero makes performance zero, regardless of how capable the model is. And a smarter model operating on fractured context does not produce fewer errors. It produces more elaborate, more persuasive, more dangerous errors.

Over the last decade, intelligence advanced by roughly three orders of magnitude. Context barely moved. It remained locked in dashboards, employee knowledge, and the heads of people who have been at the company long enough to know where the bodies are buried. You can buy intelligence at API prices. You cannot buy context at any price.

Every AI agent your organization has deployed arrived with extraordinary capability and no context. It knows everything in general and nothing about your business specifically. It has never shadowed anyone. It does not know the unwritten rules. It has never been corrected or coached. It is Day 1. Every single time.

The context layer is the organizational living brain. It sits between your data estate and your AI agents, encoding what every experienced employee accumulates over months: metrics and their definitions, concepts and the rules that govern them, entities and how they relate across systems, and the judgment calls embedded in real operations.

Three walls get in the way. Building the agent takes five minutes; adding business context takes five months. Agents do not share their learnings, so every new agent starts from zero. And when multiple agents work the same domain, their conflicting definitions of "revenue" or "active customer" surface at the seam, confidently, persuasively, and wrongly.

But four things compound once you get past the walls. Business operations are already encoded in your systems, and modern AI can bootstrap a semantic map in hours, not months. Context quality compounds, with each human refinement making the next generation better. Every agent interaction and correction is signal, and the enterprises capturing it are building a moat nobody outside the company can replicate. And context, like code, requires lifecycle management: versioned, tested, governed, or it decays silently.

Context will become as foundational as content was during the internet era. Right now it feels optional. Within three years it will feel urgent. Within five, no enterprise AI strategy runs without it.

The knowledge graph community has been doing this work for decades: ontologies, taxonomies, semantic models, entity resolution, lineage. The discipline of asking what things actually mean, who decides, and how that meaning travels across systems. Every frontier lab is, in some form, now building toward what knowledge graph practitioners have been telling enterprises to build all along.

The fundamental decisions about what "customer" means, what "revenue" means, what a "decision" means, are about to be made for every enterprise in the world. This will be done either by the people who have spent their careers thinking carefully about meaning, or by the people who are just now discovering its value. The knowledge graph community is best positioned to lead. It should refuse to become a historical footnote.

By @Prukalpa metadataweekly.substack.com/p/the-context-…

#KnowledgeGraphs #ContextEngineering #EnterpriseAI #SemanticLayer #DataStrategy

--

Connected Data London 2026 | 11–12 November | Leonardo Royal Hotel London Tower Bridge
🎤 Share your work with the world's most passionate data community. The Call for Submissions is open. connected-data.london/2026-call-for-…
🎟 Tickets on sale now. Early bird discounts up to 30%. 2026.connected-data.london/?utm_source=tw…
📺 Sponsorship opportunities available. Contact info@connected-data.london for details.

#KnowledgeGraph #GraphRAG #Ontology #Graph #AI #DataScience #GraphDB #SemTech
Connected Data retweeted
The Year of the Graph
The Memory Layer: Knowledge Graphs, AI Agents, and the 48-Hour Tool That Caught Elon's Eye

AI agents forget everything the moment the context window closes. This book is about fixing that. The Memory Layer teaches you how to build persistent, structured memory for AI systems using knowledge graphs, the data structure that compresses a five-million-token codebase into 176,000 tokens without losing what matters.

What you will learn
• Why binary graphs lose the signal that matters, and how weighted, typed, directed hyperedges fix it
• How the Leiden algorithm finds communities your file system never told you existed
• How to compress an entire codebase into a single context window at a 28:1 ratio
• How to build a confidence firewall so AI agents know which edges are steel and which are rope
• How to wire a knowledge graph to Claude, Cursor, or any MCP client as a native tool
• Why neurosymbolic AI is not a compromise, it is the architecture

Who this is for
• Software engineers who want their AI tools to actually remember the codebase
• ML engineers building agents that need reliable, structured context
• Technical leaders evaluating GraphRAG, LightRAG, and HybridRAG
• Anyone who has watched an LLM hallucinate a function that does not exist

Graph theory from Euler to PageRank. Ontologies and the Semantic Web. The modern AI stack. Six graph databases compared by the wall each one breaks through. Graphify's architecture, the confidence system, the multimodal pipeline, the compression arithmetic. A full build walkthrough from raw text to Neo4j to MCP server. Neurosymbolic AI. The future of Software 3.0.

By Safi Shamsi, creator of Graphify
safishamsi.gumroad.com/l/qetvlo

Graphify is the open-source tool this book is built around. It reads code, docs, PDFs, images, and audio and wires them into one queryable knowledge graph. It runs entirely locally. Nothing leaves your machine.

#OpenSource #Book #Release #Analytics #GraphTheory #Ontology #MachineLearning #GraphRAG #EmergingTech

--

Join the Conversation
Subscribe to the Year of the Graph newsletter for quarterly insights on #KnowledgeGraphs, #GraphDB, Graph #Analytics, #AI, #DataScience and #SemTech.
📧 Subscribe: yearofthegraph.xyz/newsletter
💼 Sponsorship inquiries: yearofthegraph.xyz/contact/
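The "weighted, typed, directed hyperedge" the book contrasts with plain binary graphs can be sketched in a few lines of plain Python. This is a toy illustration of the general idea, not Graphify's actual data model; every name below is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class HyperEdge:
    """A directed, typed, weighted hyperedge: many sources -> many targets."""
    edge_type: str    # e.g. "implements", "imports", "documents"
    sources: tuple    # node ids on the tail side
    targets: tuple    # node ids on the head side
    weight: float = 1.0  # confidence / strength of the relationship

# A binary graph can only say "a relates to b"; a typed, weighted hyperedge
# can say "these three modules jointly implement this feature, with 0.9
# confidence" in a single edge.
edge = HyperEdge("implements",
                 ("auth.py", "tokens.py", "session.py"),
                 ("feature:login",),
                 weight=0.9)

print(edge.edge_type, len(edge.sources))  # implements 3
```

The weight is what a "confidence firewall" could threshold on: edges above some cutoff are treated as load-bearing, the rest as tentative.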
Connected Data @Connected_Data
Revolutionizing Customer Engagement Through Graph RAG Systems

This presentation provides an overview of the numerous innovative applications of generative AI which are opened up in particular through the use of Retrieval Augmented Generation (RAG). In particular, the pitfalls and risks to be addressed with RAG will be discussed.

At the centre of the presentation is Semantic RAG, which can eliminate numerous disadvantages of conventional RAG architectures. We show how Semantic RAG can be used to implement personalised, user-friendly dialogue systems even in business-critical processes. We compare Semantic RAG with other GraphRAG approaches and discuss the advantages and the underlying methodology using examples from customer support, knowledge management and ESG (Environmental, Social, Governance).

youtu.be/hwJh0EAGVZ0?ut…

--

Andreas Blumauer, CEO and co-founder, Semantic Web Company (SWC)
Andreas Blumauer is CEO and co-founder of Semantic Web Company (SWC), the provider and developer of the PoolParty Semantic Platform.

--

Welcome to Connected Data London's #ThrowbackThursday
Every Thursday at 3pm GMT, we are releasing gems from our vault on #YouTube.
Tune in and learn from leaders and innovators; subscribe to our channel and watch premieres as they are released!

#knowledgegraph #graphdatabase #graph #AI #datascience #analytics #semtech #ontology
Connected Data @Connected_Data
Can we trust ontologies generated by LLMs?

Large Language Models are becoming powerful assistants for Knowledge Graph and Ontology Engineering. But when they generate ontologies, they can also introduce subtle, and sometimes critical, modeling mistakes.

"Pitfalls in AI-Generated Ontologies: Strategies for Detection and Mitigation" discusses how to move from enthusiasm to reliability when using LLMs for ontology engineering, with two concrete contributions:
* Ontology Pitfalls Detector, a new open-source tool that detects mistakes in LLM-generated ontologies.
* Ontology Toolkit by Lettria, a platform for automatically building ontologies from documents, while avoiding common ontology design mistakes.

Pitfall Detection evaluation library
* Tailored to LLM-generated ontologies
* Covers structural, logical, naming, and semantic issues
* Complementary to existing libraries

Input
* OWL/RDF ontology

Detection techniques
* SPARQL queries
* Hierarchy analysis
* Semantic similarity (SBERT), distance over WordNet
* LLM-as-a-judge

Ontology Toolkit (generation pipeline)
Goal: automatically generate high-quality ontologies from unstructured text.

Approach
* Deterministic, multi-stage pipeline
* Each stage produces structured intermediate outputs and applies strict validation and correction loops

Core principles
* Use-case-driven extraction (focus on relevant concepts)
* Explicit semantics (no implicit or ambiguous modeling)
* Controlled hierarchy construction (avoid flat or noisy structures)
* Logical enrichment with OWL axioms

By Raphael Troncy, Pasquale Lisena, Julien Plu, Oscar Moreno Escobar and Edouard Trouillez (EURECOM, Lettria)

Ontology Pitfalls Detector: github.com/D2KLab/Ontolog…
Ontology Toolkit: perseus.lettria.com
Presentation: docs.google.com/presentation/d…

#LLMs #OntologyEngineering #SemanticWeb #GenerativeAI #ResponsibleAI #KnowledgeRepresentation #EmergingTech

--

Connected Data London 2026 | 11–12 November | Leonardo Royal Hotel London Tower Bridge
🎤 Share your work with the world's most passionate data community. The Call for Submissions is open. connected-data.london/2026-call-for-…
🎟 Tickets on sale now. Early bird discounts up to 30%. 2026.connected-data.london
📺 Sponsorship opportunities available. Contact info@connected-data.london for details.

#KnowledgeGraph #GraphRAG #Ontology #Graph #AI #DataScience #GraphDB #SemTech
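To make the "structural and naming issues" category concrete: one classic pitfall in generated ontologies is a class declared without a human-readable label. A real detector like the one above would run SPARQL over the OWL/RDF graph; this dependency-free sketch fakes the triples as Python tuples purely to show the shape of such a check (all identifiers are made up):

```python
# Toy triple store: (subject, predicate, object)
triples = [
    ("ex:Patient", "rdf:type",        "owl:Class"),
    ("ex:Patient", "rdfs:label",      "Patient"),
    ("ex:Dx",      "rdf:type",        "owl:Class"),     # no label: naming pitfall
    ("ex:Dx",      "rdfs:subClassOf", "ex:Patient"),    # dubious hierarchy
]

def classes_without_labels(triples):
    """Flag declared classes that carry no rdfs:label (a naming pitfall)."""
    classes  = {s for s, p, o in triples if p == "rdf:type" and o == "owl:Class"}
    labelled = {s for s, p, o in triples if p == "rdfs:label"}
    return sorted(classes - labelled)

print(classes_without_labels(triples))  # ['ex:Dx']
```

The equivalent SPARQL would select `owl:Class` subjects with no `rdfs:label`; hierarchy and semantic checks follow the same pattern over different predicates.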
Connected Data retweeted
The Year of the Graph
From Retrieval Augmented Generation to Knowledge Augmented Generation

Using ontology in Retrieval Augmented Generation (RAG) is getting traction. Sergey Vasiliev labels this family of approaches KAG: Knowledge Augmented Generation. Rather than only improving retrieval, the aim is to integrate a knowledge graph as a reasoning substrate. In this view, the graph is not merely a retriever index but a semantic backbone.

In "Enhancing HippoRAG with Graph-Based Semantics", a team from Graphwise show how an ontology-based knowledge graph boosts the multi-hop Q&A accuracy of a leading schemaless GraphRAG system. Replacing generic graph construction with strict ontologies and structured knowledge graphs transforms HippoRAG from an associative engine into a reasoning engine.

Granter research compared a variety of approaches: standard vector-based RAG, GraphRAG, and retrieval over knowledge graphs built from ontologies derived either from relational databases or textual corpora. Results show that ontology-guided knowledge graphs incorporating chunk information achieve competitive performance with state-of-the-art frameworks.

That's not to say that other RAG and GraphRAG approaches have gone away. Raphaël MANSUY elaborates on why classic RAG doesn't work and what to do about it, as a preamble to introducing EdgeQuake: a high-performance open-source Graph-RAG framework in Rust. MegaRAG automatically builds knowledge graphs from visual documents. And Graphcore Research published UltRAG: a Universal Simple Scalable Recipe for Knowledge Graph RAG.

A group of Chinese researchers published a survey of Graph Retrieval-Augmented Generation: a systematic survey of GraphRAG, with workflow formalization, downstream tasks, application domains, evaluation methodologies, industrial use cases, and an open-source repository.

Google published a guide to building GraphRAG agents with Google's Agent Development Kit. This hands-on tutorial demonstrates how to create intelligent agents that understand data context through graph relationships and deliver highly accurate query responses.

Steve Hedden explores the rise of context engineering and semantic layers for Agentic AI. He notes that RAG may have been necessary for the first wave of enterprise AI, but it's evolving into something larger. Neo4j's Alex Gilmore wrote the Text2Cypher Guide, elaborating on when and how to implement Text2Cypher in agentic applications.

--

📩 Excerpt from The Year of the Graph Spring 2026 newsletter
Read "Beyond Context Graphs: How Ontology, Semantics, and Knowledge Graphs Define Context" with more sections, references and attribution here 👇
yearofthegraph.xyz/newsletter/202…

All things #KnowledgeGraph, #GraphDB, Graph #Analytics / #DataScience / #AI and #SemTech.
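The multi-hop advantage that ties these systems together can be shown with a minimal traversal sketch. This is toy data in pure Python, implying no particular framework: a vector index retrieves chunks that look similar to the question, but "which company does Alice's manager work for?" is answered by walking two connected, typed edges:

```python
# Toy knowledge graph as adjacency: node -> {relation: neighbour}
graph = {
    "Alice": {"managed_by": "Bob"},
    "Bob":   {"works_for": "Acme"},
}

def follow(start, *relations):
    """Answer a multi-hop query by walking typed edges in order."""
    node = start
    for rel in relations:
        node = graph.get(node, {}).get(rel)
        if node is None:
            return None  # path broken: the graph cannot support the claim
    return node

print(follow("Alice", "managed_by", "works_for"))  # Acme
```

No single chunk needs to contain the full answer; it emerges from the relationship structure, which is the core argument for graph-backed retrieval over purely associative retrieval.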
Connected Data @Connected_Data
AI and Knowledge Graphs: A Mutually Beneficial Relationship 🤝

While LLMs excel at reasoning and natural language, they often struggle with factual consistency and domain-specific truth. This is where Knowledge Graphs step in, acting as the essential "truth layer" for GenAI.

For #CDL26, we want to see how you are building the next generation of AI systems. We are looking for submissions on:
• Graph RAG: real-world lessons, variants, and Knowledge Augmented Generation (KAG).
• Graphs and Agents: powering agent workflows and providing agentic memory.
• Neuro-symbolic AI: combining rule-based reasoning with machine learning.
• Graph Learning: use cases for GNNs and Graph Foundation Models.

If you are bridging the gap between symbolic logic and neural networks, we want you on our stage in London.

📝 Submit your proposal: connected-data.london/2026-call-for-…

#ConnectedData #KnowledgeGraphs #DataScience #AI #GraphDB #Analytics #SemTech #EmergingTech #GenAI #AgenticAI #GraphRAG #NLP #CDL26
Connected Data @Connected_Data
Do You Need An Upper Ontology?

Picture a reasonably sharp engineer. They've been told to build an ontology. They've installed Protégé, loaded BFO, and are now staring at Continuant and Occurrent, wondering what any of this has to do with their product catalogue or supply chain graph. This is not a failure of intelligence. It is a failure of framing.

Kurt Cagle makes a pointed case: for most projects, you do not need an upper ontology. Choosing one without understanding what you are actually buying into may make your problem significantly worse. Why? Because an upper ontology is not a neutral foundation. It is a methodology in disguise, encoding specific philosophical commitments about what kinds of things exist, how change is modelled, and how relationships are typed.

The argument cuts deep in several directions:
• Every extension is a fork. The moment you create a new class or property specific to your domain, you have amended the contract. And you will always fork it.
• Most organisations that claim to use OWL do not, in any meaningful sense, use the reasoner. What they actually have is a knowledge graph with some rdf:type declarations: a UML diagram that happens to serialise to Turtle. SHACL handles everything most teams are actually doing, with explicit operational semantics.
• The bootstrapping argument is now broken. It is now possible to generate a robust domain ontology, including SHACL shapes, property definitions, and class hierarchies, in hours, not months. The "free partial model" an upper ontology provides is no longer saving you time. It may be costing you fit.
• And for AI systems specifically: most upper ontologies were not built with reification as a first-class concern. RDF-Star changes the epistemic unit from a triple to a contextualised, provenance-bearing claim. Named graphs dissolve the open/closed world binary by making world assumptions local to a graph context. Frameworks designed before these primitives were serious may actively obstruct the modelling patterns that AI systems require.

The actual question is not which upper ontology to choose. It is: what problem are you solving, and at what scope? Upper ontologies are not wrong. They are answers to a specific question. Most organisations, most of the time, are not asking that question; they have simply been told that they should be.

ontologist.substack.com/p/do-you-need-…

#Ontology #KnowledgeEngineering #SemanticWeb #RDF #SHACL

--

Connected Data London 2026 has been announced! 11-12 November, Leonardo Royal Hotel London Tower Bridge
📝 connected-data.london/post/cdl-2026-…
Join us for all things #KnowledgeGraph #Graph #analytics #datascience #AI #graphDB #SemTech #Ontology
🎟 Ticket sales are open. Benefit from early bird prices with discounts up to 30%. 2026.connected-data.london
📺 Sponsorship opportunities are available. Maximize your exposure with early onboarding. Contact us at info@connected-data.london for more.
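The claim that "SHACL handles everything most teams are actually doing" is a point about closed-world validation: instead of an open-world reasoner inferring new facts, a shape simply rejects instance data that violates an explicit constraint. This dependency-free sketch mimics that validation style (the shape and instance data are invented for illustration; real SHACL runs over RDF, e.g. with a library such as pySHACL):

```python
# A "shape" as an explicit, closed-world constraint on instance data.
shape = {
    "target_class": "ex:Product",
    "required_properties": {"ex:name", "ex:price"},
}

instances = {
    "ex:widget": {"rdf:type": "ex:Product", "ex:name": "Widget"},  # price missing
    "ex:gadget": {"rdf:type": "ex:Product", "ex:name": "Gadget", "ex:price": 9.99},
}

def validate(instances, shape):
    """Report each target-class instance missing a required property."""
    violations = []
    for iri, props in instances.items():
        if props.get("rdf:type") == shape["target_class"]:
            missing = shape["required_properties"] - props.keys()
            if missing:
                violations.append((iri, sorted(missing)))
    return violations

print(validate(instances, shape))  # [('ex:widget', ['ex:price'])]
```

An OWL reasoner, by contrast, would not flag the widget at all: under the open-world assumption, an unstated price is merely unknown, which is exactly why validation-oriented teams reach for shapes instead.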
Connected Data retweeted
The Year of the Graph
SEMMweb: A new open-source Ontology Editor

Ten years ago, Semmtech built a tool for their own daily work. As early practitioners in Linked Data, they needed a way to work directly with ontologies: precise, structured, and aligned with standards like RDFS and OWL. That is how the SEMMweb Ontology Editor came to be. Now they're sharing it with the public. The goal: make Linked Data, Knowledge Graphs and Ontologies easier to work with.

The SEMMweb Ontology Editor is a Windows desktop application that allows users to create, view, edit, import, and publish ontologies according to Linked Data principles. Multiple windows assist the user in understanding the contents of an ontology (or indeed of imported ontologies) and in adding new data in the form of statements (or triples).

This tool has been designed with data professionals in mind (e.g., ontologists, Linked Data engineers, Semantic Web specialists, researchers and academic users), although other users may well find it a valuable tool too. The various views available are designed to present the data in as understandable a manner as possible for any given ontology. Additionally, drag-and-drop functionality between the various windows simplifies the process of making additions. Supported file formats for ontologies are .ttl, .owl, .rdf, .nt and .n3.

Whether you are a seasoned Knowledge Graph architect or a data analyst just starting out, this tool offers a solid foundation to:
📌 Learn Linked Data concepts through practical experience
📌 Explore the use of semantic technologies
📌 Keep Linked Data vocabularies and ontologies at your fingertips

And for those who need to lower the technical barrier for domain experts, they also developed user-friendly applications that hide this complexity, such as the Laces Ontology Manager.

Show & Tell: How to Use the SEMMweb Ontology Editor
May 19 Webinar: watch.getcontrast.io/register/semmt…
GitHub: github.com/semmtech/semmw…

#LinkedData #Ontology #SemanticWeb #OWL #KnowledgeEngineering #OpenSource

--

🤝 Put your graph tech brand in front of the people who matter
Your graph technology deserves to be seen by buyers, analysts, and builders who are actively shaping the space. The Year of the Graph is the independent hub that this community trusts. Slots for the upcoming Summer 2026 Issue are filling fast. Reach out and book yours now 👇
yearofthegraph.xyz/contact/
Connected Data @Connected_Data
Utilizing AI to transform unstructured data from complex, asset-heavy organizations into structured knowledge graphs

Organizations involved in the management of physical assets have to deal with vast collections of documents filled with design requirements, recommendations, and best practices. While these documents contain critical qualitative information, they often suffer from fragmented information retrieval, poor version control, limited ownership and insufficient sharing across the value chain.

To overcome this, knowledge graphs facilitate decomposition of each requirement or recommendation into atomic, uniquely identified items, enriched with metadata, interlinked, and related to an ontology. The resulting enriched requirement texts (Bazuin et al., 2023) enable precise retrieval, traceability and collaborative reuse. However, the information contained in existing documents must be available as a knowledge graph first.

The challenge of transforming unstructured textual documents into high-quality linked data lies both in the efficient extraction and classification of information, and in the enrichment of the extracted information with metadata. By combining different models, we are able to extract requirements, classify them, and add other relevant metadata.

This presentation showcases a validated implementation based on a language model (based on BERT), which is currently in use in the AECO (Architecture, Engineering and Construction) industry. The pipeline identifies and classifies requirements from unstructured documents, maps them to semantic models, and enriches the data with the help of AI; after validation by domain experts, the resulting datasets are published to a Linked Data platform.

This model already supports the transformation from unstructured documents into structured data, but we will show our current efforts aiming to further automate the enrichment and refinement of these requirements:
• Refining requirements to ensure SMARTness (Specific, Measurable, Achievable, Relevant, Time-bound). This reduces risks during the construction process itself.
• Linking content to domain-specific ontologies, subjects, or keywords, to enable easier querying.
• Connecting requirements to solutions, proposals, and lessons learned from other projects.

The audience will learn how to combine AI models for creating and enriching knowledge graphs. We will see the application of this pipeline in the AECO industry. Additionally, attendees will get insight into our lessons learned when using AI. We will show how to select the most relevant model, how to keep the human in the loop, and how to scope your approach. A basic understanding of Linked Data and AI will be helpful for the audience, but is not necessary to follow the presentation. We hope to inspire the audience to put their own ideas into practice.

Link to talk - Creation, Population and Enrichment of Knowledge Graphs with the Help of AI: 2025.connected-data.london/talks/creation…

--

Gulay Canbaloglu, Consultant, Semmtech
Gulay Canbaloglu is a Consultant with a background in Computer Engineering and Human-Technology Interaction. She studied at Koç University and Eindhoven University of Technology, both prestigious universities.

--

Welcome to Connected Data London's #TeaserTuesday
Every Tuesday, we share teasers from #CDL25 on our channels. Connected Data London 2025 brought together leaders and innovators. Were you there?
🎥 Watch the sessions: 2025.connected-data.london
📩 Join the community: connected-data.london
Tune in and learn from leaders and innovators; subscribe and watch premieres as they are released!

Join community legends and new voices in #CDL25 for all things #KnowledgeGraph #Graph #analytics #datascience #AI #graphDB #SemTech #Ontology
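The extract, classify, enrich pipeline the talk describes can be caricatured as a few composable stages. This is a deliberately crude, keyword-based stand-in for the BERT-based models in the actual implementation; every function name, label, and the sample document are illustrative only:

```python
def extract(doc):
    """Split a document into candidate requirement sentences."""
    return [s.strip() for s in doc.split(".") if "shall" in s or "must" in s]

def classify(req):
    """Crude keyword stand-in for a trained requirement classifier."""
    return "safety" if "fire" in req else "general"

def enrich(req, label):
    """Attach metadata so each requirement becomes an atomic, identified item."""
    return {"id": f"req-{abs(hash(req)) % 1000:03d}", "text": req, "class": label}

doc = "Doors must open outward. The facade shall resist fire for 60 minutes."
graph_items = [enrich(r, classify(r)) for r in extract(doc)]
for item in graph_items:
    print(item["class"], "->", item["text"])
```

In the real pipeline each stage is a model plus a human-in-the-loop validation step, and the enriched items are linked to an ontology before publication; the point here is only the staged shape of the transformation.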
Connected Data @Connected_Data
Gartner Says Lack of Semantics Causes Inaccurate AI Agents and Wasted Spending

Cutting corners on data context and semantic foundations will increase costs. Speaking at the Gartner Data & Analytics Summit in London, Rita Sallam, Distinguished VP Analyst at Gartner, said:

"Agentic AI outcomes depend on context including semantic representations of data. Without context – a clear understanding of the specific relationships and rules within an organization's data – AI agents cannot operate accurately and are far more likely to hallucinate, introduce bias and produce unreliable results."

"Organizations that fail to adopt comprehensive context structures — supported by a robust data layer — will perpetuate data inefficiencies and face heightened financial costs, as well as legal and reputational damage."

Gartner predicts that by 2027, organizations that prioritize semantics in AI-ready data will increase their agentic AI accuracy by up to 80% and reduce costs by up to 60%. "Context with semantic coherence will become a cost-control and trust strategy, not a nice-to-have."

This message will be familiar to anyone who has attended Connected Data London. The community has been championing exactly this, Relationships, Meaning, and Context in Data, since 2016.

At CDL26 this November, speakers including William Tunstall-Pedoe (founder of the core technology behind Amazon Alexa, now building trustworthy neuro-symbolic AI), Juan Sequeda (Principal Fundamental Researcher, ServiceNow), Malcolm Hawker (CDO, Profisee), and Jessica Talisman (Semantic Architect) will go deep on the very foundations Gartner is now calling non-negotiable.

The 2026 Call for Submissions is open. Topics of special interest include Knowledge Graphs and LLMs, GraphRAG, Agentic AI, Neuro-symbolic AI, Ontologies, and Semantic Technology: the building blocks of the context layer Gartner says organizations can no longer afford to ignore. If you are working on this, this is your community.

gartner.com/en/newsroom/pr…

#SemanticAI #AgenticAI #KnowledgeGraphs #DataGovernance #OntologyFirst

--

Connected Data London 2026 has been announced! 11-12 November, Leonardo Royal Hotel London Tower Bridge
📝 connected-data.london/post/cdl-2026-…
Join us for all things #KnowledgeGraph #Graph #analytics #datascience #AI #graphDB #SemTech #Ontology
🎟 Ticket sales are open. Benefit from early bird prices with discounts up to 30%. 2026.connected-data.london
📺 Sponsorship opportunities are available. Maximize your exposure with early onboarding. Contact us at info@connected-data.london for more.
Connected Data retweeted
The Year of the Graph
Context Graph Architecture: Why Knowledge Architecture Is the Missing Layer

Context graphs are being called AI's next trillion-dollar opportunity. But before chasing the new label, it's worth asking: what's actually new here?

Forrester's Charles Betz cuts through the noise: EA has maintained entity graphs since Zachman (1987). CMDBs go back to ITIL v1 in the 1990s. APM, process mining, ChatOps, architecture decision records: these disciplines have been assembling the pieces of a unified context graph in isolation for decades. The graph was never missing. It's fragmented.

George Anadiotis takes the argument further. The decision trace layer, who decided what, why, under what authority, isn't absent from organisations. It lives in Slack threads, incident postmortems, Jira tickets, and people's heads. Extracting it and making it queryable is not a database problem. It requires knowledge engineering: observing work practices, interviewing domain experts, encoding tacit reasoning in formal, machine-readable representations. That's the missing layer. Not the graph itself, but the knowledge architecture that makes it governable.

The infrastructure answer is not exotic either. RDF/OWL provides typed entities and governed relationships. Named graphs handle provenance and versioning. SPARQL enables queryability. These are the building blocks that turn an entity layer from a drawing into something that can actually satisfy governance requirements. Alberto D. Mendoza's conversion of ArchiMate 3.2 to an RDF ontology is a direct, working instantiation of this approach.

On the tooling side: the LLM Wiki pattern, extracting discrete facts from unstructured sources into a graph and then synthesising them into structured queryable form, is being adopted at scale as a population accelerator for enterprise Agentic AI implementations. The Semantic Web has a 25-year library of patterns, vocabularies and tools to build on.

The key reframe: ontological modeling was never meant to be a runtime. Its value is in defining consistent logic aligned with domain knowledge, ensuring concepts don't contradict each other across different data schemas. Entity graphs anchored in EA, EA anchored in knowledge representation, decision traces made queryable: that's context graph architecture grounded in something that can actually hold.

The question isn't whether context graphs are real. It's whether organisations will start building the knowledge architecture they require now, or wait until their competitors have a three-year head start.

By @linked_do
linkeddataorchestration.com/2026/05/08/con…

#KnowledgeArchitecture #EnterpriseArchitecture #ContextGraphs #AgenticAI #Ontology

--

💬 'A great newsletter' - Claudia Remlinger, former Sr. Marketing Director, Neo4j. Join readers from Amazon, Capgemini, Michelin, Neo4j & more.
Subscribe to the Year of the Graph newsletter for quarterly updates and insights on all things #KnowledgeGraph, #GraphDB, Graph #Analytics / #DataScience / #AI and #SemTech 👇
yearofthegraph.xyz/newsletter
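The named-graph point, making provenance and world assumptions local to a graph context, amounts to storing quads rather than triples. A toy sketch in pure Python (the predicates, graph names, and claims below are all invented for illustration):

```python
# Quads: (subject, predicate, object, named_graph)
quads = [
    ("ex:rev", "ex:definedAs", "booked revenue",     "graph:finance-2024"),
    ("ex:rev", "ex:definedAs", "recognised revenue", "graph:finance-2026"),
    # Provenance about a graph lives in another graph:
    ("graph:finance-2026", "prov:approvedBy", "ex:cfo", "graph:meta"),
]

def in_graph(quads, graph_name):
    """Scope a query to one named graph: truth local to a context."""
    return [(s, p, o) for s, p, o, g in quads if g == graph_name]

# Globally the subject carries conflicting claims; each named graph is
# internally consistent and carries its own provenance trail.
print(in_graph(quads, "graph:finance-2026"))
```

This is the mechanism that lets "revenue" mean different things in different governed contexts without the conflict leaking into every query, which is precisely the governance property the article argues a bare entity graph lacks.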
Connected Data @Connected_Data
📐 Semantic Architecture is the New Data Infrastructure

Without structure, data is just noise. We are pleased to announce @JessicaTalisman as a featured speaker for 2026. As the founder of The Ontology Pipeline, Jessica specialises in the semantic architectures that ensure data integrity and discoverability in complex environments.

Join a decade of community expertise this November in London. Secure your Super Earlybird rate before 1 June:
👉 2026.connected-data.london/speakers/jessi…

#CDL26 #SemanticWeb #InformationArchitecture #Ontology #DataIntegrity #KnowledgeGraphs
Connected Data tweet media
Connected Data
Connected Data@Connected_Data·
Introducing Create Context Graph If you've built an AI agent in the last year, you've probably learned a hard lesson: the agent isn't the hard part anymore. The context layer is. Pick any framework and you'll get a streaming chat loop and tool calls running in an afternoon. What you won't get is an answer to questions like: "Which patient did the agent recommend that treatment for last week, and why?" or "Why did we switch from JWT to OAuth2 in the auth service three months ago?" These aren't similarity questions. They're structure questions. Connected. Multi-hop. Provenance-aware. The kind of thing a flat chat log or a vector index can't really answer, because the answer lives in the relationships between things, not in the things themselves. That's the gap create-context-graph is built to close. One command generates a complete, working full-stack application: a FastAPI backend wired to Neo4j, a Next.js 15 frontend with streaming chat and an interactive graph visualization, a working AI agent in your framework of choice, and a domain ontology schema with entity types, relationships, and Cypher-powered tools. 22 domains out of the box. Healthcare, financial services, software engineering, gaming, conservation... and more. Data connectors for Linear and Claude Code let you import your own real data and turn it into a queryable graph. Vector stores give you recall. The graph gives you understanding. The agent isn't the hard part anymore. Memory is. Let's give the agents a graph to think in. By William Lyon @Neo4j medium.com/neo4j/introduc… #Neo4j #GraphMemory #AgentMemory #ContextGraph #LLMAgents #OpenSource #EmergingTech #AI #Agents -- Come meet Neo4j at #CDL26! Connected Data London 2026 | 11–12 November | Leonardo Royal Hotel London Tower Bridge 🎤 Share your work with the world's most passionate data community. The Call for Submissions is open. connected-data.london/2026-call-for-… 🎟 Tickets on sale now. Early bird discounts up to 30%. 
2026.connected-data.london 📺 Sponsorship opportunities available. Contact info@connected-data.london for details. #KnowledgeGraph #GraphRAG #Ontology #Graph #AI #DataScience #GraphDB #SemTech
Connected Data tweet media
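The "why did we switch from JWT to OAuth2" question from the post is a multi-hop traversal, not a similarity lookup. A minimal sketch of that idea (edge names, node IDs, and the incident number are invented -- this is not create-context-graph's actual schema), using a breadth-first walk to recover the chain of relationships behind a decision:

```python
from collections import deque

# Toy context graph: edges as (source, relation, target).
EDGES = [
    ("auth-service", "HAD_DECISION", "switch-to-oauth2"),
    ("switch-to-oauth2", "REPLACED", "jwt"),
    ("switch-to-oauth2", "MOTIVATED_BY", "token-revocation-incident"),
    ("token-revocation-incident", "REPORTED_IN", "INC-1184"),
]

def explain(start, goal):
    """Return the relationship path linking two entities -- the 'why'
    that a flat chat log or a vector index cannot reconstruct."""
    adj = {}
    for s, rel, t in EDGES:
        adj.setdefault(s, []).append((rel, t))
    queue, seen = deque([(start, [])]), {start}
    while queue:
        node, path = queue.popleft()
        if node == goal:
            return path
        for rel, nxt in adj.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [(node, rel, nxt)]))
    return None  # no connecting path

print(explain("auth-service", "INC-1184"))
```

Each hop in the returned path is a (source, relation, target) triple; in a real deployment the same walk would be a Cypher path query against Neo4j rather than an in-memory BFS.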
Connected Data retweeted
The Year of the Graph
The Year of the Graph@TheYotg·
Process Tempo is the missing layer every graph stack needs Built to accelerate the design, development, and deployment of graph-driven applications, Process Tempo turns your ideas into production-ready solutions faster. Whether you’re building enterprise knowledge graphs or data intelligence platforms, Process Tempo provides the speed, structure, and flexibility needed to bring your connected ideas to life. #Graph #ApplicationDevelopment #IDE #LowCode #NoCode #SoftwareEngineering Process Tempo - helping bring Year of the Graph to life 👇 calendly.com/processtempo/y…
The Year of the Graph tweet media
Connected Data
Connected Data@Connected_Data·
What is an Ontologist or Knowledge Engineer, and how do you become one? Some job titles still make people pause. Ontologist. Knowledge Engineer. Semantic Data Engineer. What do these roles actually involve? At the core, this is about operating at the intersection of data, meaning, and systems -- designing, implementing, and operationalizing semantic models and knowledge graphs that power search, analytics, and AI-driven applications. Think of it as bridging the gap between data engineering, information architecture, and domain expertise. The goal: organizational knowledge that is machine-readable, consistent, and aligned across teams. Key responsibilities span a wide range: * Designing and maintaining ontologies and semantic data models * Facilitating knowledge extraction from domain experts and translating it into shared definitions * Building and maintaining knowledge graphs from structured and unstructured data * Integrating graph data with APIs, search systems, and downstream applications * Supporting semantic search, entity resolution, recommendation systems, and AI/ML use cases * Educating and guiding teams transitioning from relational to graph-based thinking On the technical side, the role typically calls for experience with RDF, OWL, SKOS or property graphs, query languages like SPARQL or Cypher, and programming in Python. Backgrounds in Information Science, Linguistics, Computer Science, or Philosophy are all viable paths in. The role itself comes in variants.  An Ontologist leans into conceptual modeling and knowledge representation.  A Knowledge Engineer balances modeling with implementation.  A Semantic Data Engineer focuses on pipelines, infrastructure, and deployment.  In practice, strong candidates often span all three. What does success look like?  
Clear and consistent definitions across teams, knowledge graphs that serve real-world use cases, improved data quality and discoverability, and genuine collaboration between technical and non-technical stakeholders. By Ashleigh Faith, Katariina Kari and Veronika Heimsbakk. youtu.be/227m9jGICps?ut… #Ontology #KnowledgeEngineering #SemanticWeb #KnowledgeGraphs #DataEngineering -- Connected Data London 2026 | 11–12 November | Leonardo Royal Hotel London Tower Bridge 🎤 Share your work with the world's most passionate data community. The Call for Submissions is open. connected-data.london/2026-call-for-… 🎟 Tickets on sale now. Early bird discounts up to 30%. 2026.connected-data.london/?utm_source=tw… 📺 Sponsorship opportunities available. Contact info@connected-data.london for details. #KnowledgeGraph #GraphRAG #Ontology #Graph #AI #DataScience #GraphDB #SemTech
YouTube video
Connected Data tweet media
Connected Data retweeted
The Year of the Graph
The Year of the Graph@TheYotg·
BrowseNet: Graph-Based Associative Memory for Contextual Information Retrieval Standard RAG has a structural blind spot: it retrieves isolated chunks without modeling how they relate to each other. That works fine for simple questions. It falls apart the moment reasoning needs to cross multiple documents. BrowseNet, accepted at ICLR 2026, addresses this head-on. Developed by researchers at IIT Madras and DevRev, it rethinks retrieval as a graph traversal problem. The core idea: transform a corpus into a Graph-of-Chunks, where nodes are document passages enriched with semantic embeddings, and edges connect passages that share named entities or synonymous terms. When a multi-hop query arrives, BrowseNet decomposes it into a directed acyclic graph of single-hop subqueries, then walks the Graph-of-Chunks to surface the reasoning path the query actually needs. This two-track approach, combining lexical graph structure with semantic similarity, outperforms both dense retrieval methods and graph-augmented RAG pipelines including HippoRAG-2 on HotpotQA, 2WikiMQA, and MuSiQue benchmarks. What makes it practically compelling is the cost story. BrowseNet achieves this with roughly 33x lower LLM cost than the previous SOTA, and only a marginal latency trade-off of under half a second per query. The entire pipeline requires just one LLM call at retrieval time, guided by pre-generated subqueries, rather than repeated back-and-forth inference. The graph construction uses GLiNER for named entity recognition and ColBERTv2 for synonym matching, with no generative LLM needed during offline indexing. The code and datasets are fully open-sourced. github.com/bisect-group/B… #KnowledgeGraph #RAG #MultiHopQA #GraphML #LLM #OpenSource -- 📩 The Year of the Graph Spring 2026 newsletter issue is out! 
Beyond Context Graphs: How Ontology, Semantics, and Knowledge Graphs Define Context 👇 yearofthegraph.xyz/newsletter/202… All things #KnowledgeGraph, #GraphDB, Graph #Analytics / #DataScience / #AI and #SemTech. Subscribe and follow to be in the know. Reach out if you'd like to be featured
The Year of the Graph tweet media
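BrowseNet's Graph-of-Chunks can be illustrated with a hand-built miniature (the passages, entity sets, and helper names below are invented; the real system extracts entities with GLiNER and adds synonym edges via ColBERTv2, and its traversal is guided by an LLM-generated subquery DAG):

```python
from itertools import combinations

# Toy Graph-of-Chunks: each node is a passage tagged with its named entities.
chunks = {
    "c1": {"text": "Marie Curie won the Nobel Prize in Physics in 1903.",
           "entities": {"Marie Curie", "Nobel Prize"}},
    "c2": {"text": "Marie Curie was born in Warsaw.",
           "entities": {"Marie Curie", "Warsaw"}},
    "c3": {"text": "Warsaw is the capital of Poland.",
           "entities": {"Warsaw", "Poland"}},
}

# Edge between two chunks whenever they share at least one entity.
graph = {cid: set() for cid in chunks}
for a, b in combinations(chunks, 2):
    if chunks[a]["entities"] & chunks[b]["entities"]:
        graph[a].add(b)
        graph[b].add(a)

def walk(subquery_entities, start):
    """One single-hop subquery: follow edges from `start` to neighbours
    mentioning the entities the subquery needs."""
    return [n for n in graph[start]
            if chunks[n]["entities"] & subquery_entities]

# Multi-hop question: "In which country was Marie Curie born?"
# decomposed into single-hop subqueries walked over the chunk graph:
hop1 = walk({"Warsaw"}, "c1")     # where was she born?
hop2 = walk({"Poland"}, hop1[0])  # which country is that city in?
print(hop1, hop2)
```

The point of the sketch: the answer chunk (c3) shares no entity with the question's anchor chunk (c1), so pure similarity retrieval can miss it, while the entity-edge traversal reaches it in two hops.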
Connected Data
Connected Data@Connected_Data·
Full Stack Graph Machine Learning In this course, we take the skills you've developed in working with data tables and DataFrames and extend them to cover graphs, networks, knowledge graphs, property graphs and graph databases. We work with different types of graphs from multiple domains. This includes natural networks like social networks, collaboration networks or communications networks, as well as structural networks like the plan of a Python program or the 3D mesh of a model in a 3-dimensional scene. Starting from tabular DataFrames and traditional machine learning, we build a theoretical understanding of modern data science and machine learning methods for graph-structured datasets and the practical skills that enable students to implement them. Students who graduate from the course can add graph analytics and graph machine learning to their daily workflows for small and large datasets using the most popular tools. We introduce common Python tools for graph analytics and graph machine learning, as well as the popular graph databases Neo4j and KuzuDB. We will focus on property graphs and will compare them with RDF / triple stores using SPARQL. We will cover the core methods from social network analysis and network science that will guide your informed intuition in doing graph machine learning. We build a knowledge graph using natural language processing (NLP), combine its duplicate nodes using deep networks for entity resolution and mine the resulting graph for patterns. Finally, we will build a full-stack graph ML application that shows network visualizations of explainable GNNs for chemical engineering.
It is not enough to teach students graph neural networks (GNNs) - they need to work their way up from graph theory to GNNs using common Python tools in design patterns based on real-world use cases. Via a streamlined Docker experience, students will learn to: * Describe social networks using social network analysis (SNA) * Describe and analyze any network using network science * Find significant patterns in real-world networks * Build predictive systems using traditional graph ML * Replace manual feature engineering with graph embeddings * Solve machine learning problems using graph neural networks * Visualize networks during interactive analysis Target Audience Data Scientists Data Engineers Machine Learning Engineers Software Engineers Data Analysts who know Python Managers of the above - they may struggle at the code level but will learn a lot Goals Students will graduate from the course able to work with graphs as they now work with tables and DataFrames. They will understand the fundamentals of network science and graph machine learning and how they relate to modern graph learning methods. Session Outline Graph theory - what is a graph? Examples of networks? Heterogeneous networks can model anything! Social network analysis (SNA) - social science Network science - techniques that span fields and applications Graph machine learning tasks - node, link, sub-graph, graph Graph features and kernels - feature engineering for networks Graph Neural Networks (GNNs) - neural networks shaped like graphs that learn directly from the properties and structure of the data Network visualization - data viz for small and large scale networks Format This is a hands-on class: after a lecture in each of two 2-hour sessions, we work through one or more Jupyter notebooks. The notebooks are available here:  Network Science Notebook - the fundamentals of network science with networkx and littleballoffur. This is a really neat one to cover. 
Graph Machine Learning Notebook - from traditional ML to embeddings to GNNs. I hope to give students the intuition behind modern methods by doing it manually first. Skill Level Intermediate youtube.com/watch?v=jh6mBC… -- Russell Jurney - Graph ML / Viz / LLM startup CTO at Graphlet AI. Applied AI researcher working at the intersection of large graphs and large language models (LLMs). Consultant at Graphlet AI, where he advises companies at the intersection of enterprise knowledge graphs and generative AI. -- Welcome to Connected Data London's #ThrowbackThursday Every Thursday at 3pm GMT, we are releasing gems from our vault on #YouTube. Tune in and learn from leaders and innovators; subscribe to our channel and watch premieres as they are released! #knowledgegraph #graphdatabase #graph #AI #datascience #analytics #semtech #ontology
YouTube video
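The course builds its SNA foundations with networkx; the first such measure, degree centrality, fits in a few stdlib-only lines (the example network is invented):

```python
# Degree centrality: the fraction of other nodes each node is
# directly connected to -- the simplest SNA measure of importance.
edges = [("ann", "bob"), ("ann", "cat"), ("ann", "dan"), ("bob", "cat")]

nodes = {n for e in edges for n in e}
degree = {n: 0 for n in nodes}
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

centrality = {n: d / (len(nodes) - 1) for n, d in degree.items()}
print(max(centrality, key=centrality.get))  # the network's hub: ann
```

With networkx the same result is `nx.degree_centrality(G)`; doing it manually first is exactly the intuition-building move the instructor describes.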
Connected Data
Connected Data@Connected_Data·
Building Agentic GraphRAG Systems From knowledge graphs and ontologies to a unified memory as an MCP server for your AI agent. GraphRAG isn't a retrieval algorithm. It's a data modeling problem. This insight came to Paul Iusztin after he gave two talks on the same topic in a single month, at O'Reilly's Context Engineering Event and at a Maven course on LLM inference at scale. The question flood that followed made one thing clear: engineers want GraphRAG but don't know how to build it. So why bother with GraphRAG over plain RAG? Three reasons: Context rot. As the context window fills, signal-to-noise collapses. Quality, cost, and latency all suffer. Data fragmentation. In the agent era, data lives in silos: documents, notes, research, emails, messages. There's no single clean database to query. Memory structure. An agent's unified memory naturally maps to a knowledge graph. People have preferences, histories, relationships, and time-anchored events. A KG tracks all of it. The architecture he suggests is built around an ontology-first design. Skip the ontology, and the LLM invents its own entity/relationship labels freely.  Five documents produced 17 node types and 34 relationship types in one experiment, including "part_of," "Part Of," and "part of" as three separate types. Unusable noise. With a constrained ontology, the LLM extracts only what's defined. That's when cheaper extractor models become viable. The full system covers: Two sub-ontologies: Document and Person Three extraction modes: structured, semi-structured, unstructured A five-component pipeline from raw sources to a queryable KG Hybrid retrieval with Reciprocal Rank Fusion (RRF) The whole engine exposed as a unified memory layer via an MCP server For 2-3 hop traversals, Postgres or MongoDB cover the job. Reach for a graph database when your traversal depth and scale demand it. The crown jewel use case? Personal assistants.  
With a knowledge graph, you can build a memory that actually knows what a person likes, what they've done, and what they still need to do, all properly anchored in time. Building Agentic GraphRAG Systems decodingai.com/p/agentic-grap… #GraphRAG #DataEngineering #SoftwareEngineering #AIAgents #RAG #LLM #EmergingTech -- Connected Data London 2026 | 11–12 November | Leonardo Royal Hotel London Tower Bridge 🎤 Share your work with the world's most passionate data community. The Call for Submissions is open. connected-data.london/2026-call-for-… 🎟 Tickets on sale now. Early bird discounts up to 30%. 2026.connected-data.london/?utm_source=tw… 📺 Sponsorship opportunities available. Contact info@connected-data.london for details. #KnowledgeGraph #GraphRAG #Ontology #Graph #AI #DataScience #GraphDB #SemTech
Connected Data tweet media
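The label-proliferation problem from the post - "part_of", "Part Of" and "part of" extracted as three separate relationship types - is exactly what an ontology-first constraint prevents. A minimal sketch of the idea (the allowed relation set and function names are hypothetical, not from the article):

```python
# Ontology-first extraction: relationship types are fixed up front,
# and every label an extractor emits is normalized against them.
ONTOLOGY_RELATIONS = {"part_of", "authored_by", "mentions"}

def normalize(label: str) -> str:
    """Collapse surface variants to a canonical relation type and
    reject anything the ontology does not define."""
    canonical = label.strip().lower().replace(" ", "_").replace("-", "_")
    if canonical not in ONTOLOGY_RELATIONS:
        raise ValueError(f"relation {label!r} not in ontology")
    return canonical

raw = ["part_of", "Part Of", "part of"]  # the three duplicates from the post
print({normalize(r) for r in raw})       # collapses to a single type
```

In the full pipeline this constraint is applied at extraction time (only defined types are requested from the LLM) rather than as a post-hoc filter, which is what makes cheaper extractor models viable.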