GraphNews

Agentic GraphRAG architecture
Agentic GraphRAG architecture
I’ve spent the past year building GraphRAG systems from scratch. Here’s the architecture I keep coming back to (steal it)… MongoDB is the unified memory. Voyage AI by MongoDB for embeddings. Agent on top.

Here's how the end-to-end flow looks:

1–2 / Ingestion → Data warehouse
Pull data from URIs, notes, emails, docs. Normalize into a single document schema and store in MongoDB. Example: emails, research notes, and meeting transcripts become raw_documents. This is the durable ingestion layer.

3–7 / Memory pipeline
Each document flows through:
• Clean text + metadata
• Chunk (optional)
• Graph extraction (entities + relationships)
• Normalization (merge duplicates like “Abi” vs “Abi Aryan”)
• Embeddings with Voyage AI
Output: knowledge graph objects with triplets, vectors, and metadata.

8 / Unified memory in MongoDB
Materialize into a knowledge_graph collection with entities and relationships as documents. Indexes:
• Text index → keyword recall
• Vector index → semantic recall
• Graph links → multi-hop traversal
Documents, vectors, and graph memory in one place.

9 / MCP server
Expose GraphRAG through tools:
• NL query → compact high-level retrieval
• Deep search → progressive graph expansion
• Ingest → URLs, files, conversations

10 / Harness
Claude Code orchestrates reasoning and tool usage. It decides when to retrieve vs. write memory.

11 / Skills
Skills define interaction with GraphRAG:
• Assistant-memory → harness-to-MCP bridge
• Assistant-learn → push insights back to memory

12–13 / Agentic GraphRAG
The agent selects tools dynamically. Semantic + text search retrieves entry nodes. Graph traversal (2–3 hops) expands context.
Example: "Create GraphRAG talk for O’Reilly" → Retrieve GraphRAG + O’Reilly → Expand to past talks and preferences → Return structured context.

MongoDB stores evolving graph memory. Voyage AI by MongoDB powers semantic recall. Agents turn retrieval into reasoning. One unified memory layer. One ingestion and retrieval pipeline. One agent loop.
P.S. Are you building GraphRAG with a unified memory layer, or still splitting vectors, graphs, and documents across multiple databases?
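The entry-node retrieval and 2–3-hop expansion in steps 12–13 can be sketched in a few lines of plain Python. The collection shape, field names, and entity IDs below are illustrative assumptions for this sketch, not the author's actual schema:

```python
# Illustrative documents from a knowledge_graph collection: entities and
# relationships stored side by side (step 8). Field names are assumptions.
entities = {
    "graphrag": {"name": "GraphRAG", "type": "Topic"},
    "oreilly": {"name": "O'Reilly", "type": "Org"},
    "talk_2024": {"name": "KG talk 2024", "type": "Talk"},
}
relationships = [
    {"subject": "talk_2024", "predicate": "about", "object": "graphrag"},
    {"subject": "talk_2024", "predicate": "hosted_by", "object": "oreilly"},
]

def neighbors(node):
    """Edges are plain documents; traversal reads them in either direction."""
    for rel in relationships:
        if rel["subject"] == node:
            yield rel["object"]
        elif rel["object"] == node:
            yield rel["subject"]

def expand(entry_nodes, hops=2):
    """Progressive graph expansion: breadth-first from the entry nodes that
    text/vector search returned, out to a fixed hop budget."""
    seen = set(entry_nodes)
    frontier = set(entry_nodes)
    for _ in range(hops):
        frontier = {n for node in frontier for n in neighbors(node)} - seen
        seen |= frontier
    return seen

# "Create GraphRAG talk for O'Reilly" → entry nodes → expanded context
context = expand({"graphrag", "oreilly"})
```

In a real deployment the `entities` and `relationships` dicts would be MongoDB collections and `neighbors` an indexed query; the control flow of the agent-side expansion is the same.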
·linkedin.com·
Agentic GraphRAG architecture
LOM: Unifying Ontology Construction and Semantic Alignment for Deterministic Enterprise Reasoning at Scale | The AI Journal
LOM: Unifying Ontology Construction and Semantic Alignment for Deterministic Enterprise Reasoning at Scale | The AI Journal
BEIJING, April 10, 2026 /PRNewswire/ -- As the way of managing enterprise data assets evolves from simple accumulation to value extraction, the role of AI has shifted accordingly: it is no longer limi
·aijourn.com·
LOM: Unifying Ontology Construction and Semantic Alignment for Deterministic Enterprise Reasoning at Scale | The AI Journal
The Power of Model Context Protocol: Using Natural Language to Query GraphDB
The Power of Model Context Protocol: Using Natural Language to Query GraphDB

Natural Language to SPARQL: GraphDB Ships MCP Integration

Ontotext's GraphDB has shipped an MCP server that allows clients to query RDF repositories using natural language rather than SPARQL syntax. The server uses Server-Sent Events for streaming communication, with a gateway layer managing connections to the secured GraphDB instance. Under the hood, the system translates natural language intent into SPARQL operations, performs graph traversal against the ontology, and returns structured, relationship-rich results.

The accessibility argument is straightforward and real: SPARQL has always had a steep learning curve that limits who can directly interrogate a knowledge graph. Business analysts, domain experts, and researchers who understand the data conceptually but not the query syntax have always needed a technical intermediary. The MCP-NL interface removes that barrier.

The practical demos are compelling. A query like "find software engineers at ACME" doesn't just return direct title matches — the system traverses related roles, career progressions, department structures, and biographical data encoded in the ontology, returning contextually complete results that a keyword search would miss entirely.

But the accessibility gain comes with a question that semantic practitioners should take seriously: query auditability. When a human writes a SPARQL query, the intent is explicit and reviewable. When a language model generates the SPARQL internally, the translation is opaque. For governance-sensitive deployments — regulatory compliance, legal discovery, medical records — understanding exactly what question was asked of the graph, not just what answer came back, may be a requirement.

The right architecture likely exposes the generated SPARQL to the user alongside the results, making the translation a transparency feature rather than a black box. Several implementations are moving in this direction. The capability is here; the governance patterns are still being worked out.
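One way to realize that transparency pattern is to have the tool return the generated SPARQL next to the results, so the question actually asked of the graph travels with the answer. A minimal sketch, assuming a hypothetical tool handler and response shape (this is not GraphDB's actual MCP output format, and the translation step is stubbed):

```python
def run_sparql(query: str) -> list:
    """Stub standing in for the secured GraphDB endpoint."""
    return [{"person": "http://example.org/data/jane"}]

def nl_query_tool(question: str) -> dict:
    """Hypothetical MCP tool handler: translate NL to SPARQL (stubbed here),
    run it, and return the generated query alongside the results so the
    translation is reviewable rather than opaque."""
    generated_sparql = """
    PREFIX org: <http://example.org/ontology#>
    SELECT ?person WHERE {
      ?person org:hasRole ?role .
      ?role org:title "Software Engineer" .
      ?person org:worksFor org:ACME .
    }"""  # in a real system this string comes from the LLM translation step
    return {
        "question": question,        # what the user asked
        "sparql": generated_sparql,  # what was actually asked of the graph
        "results": run_sparql(generated_sparql),
    }

response = nl_query_tool("find software engineers at ACME")
```

For governance-sensitive deployments, logging the `sparql` field gives auditors the explicit query, not just the answer.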

·graphwise.ai·
The Power of Model Context Protocol: Using Natural Language to Query GraphDB
The Rise of Neuro-Symbolic AI: A Spotlight in Gartner’s 2025 AI Hype Cycle - AllegroGraph
The Rise of Neuro-Symbolic AI: A Spotlight in Gartner’s 2025 AI Hype Cycle - AllegroGraph

Gartner's 2025 AI Hype Cycle placed generative AI in the Trough of Disillusionment — the phase that follows peak inflated expectations, when the technology has to prove it delivers durable value rather than impressive demos. The beneficiary of that correction, according to multiple analysts and platform vendors, is neuro-symbolic AI.

The neuro-symbolic paradigm combines the learning strengths of neural networks with the reasoning strengths of symbolic systems — logic rules, ontologies, SHACL constraints, and knowledge graphs. Where purely neural approaches struggle with interpretability, consistency under distribution shift, and reliable behavior in high-stakes domains, neuro-symbolic systems offer explainability by construction: you can trace a conclusion back through the symbolic layer to the rules and facts that produced it.

For regulated industries — healthcare, finance, legal, government — this is not an abstract benefit. It is often a compliance requirement. A system that can't explain why it reached a conclusion isn't deployable in contexts where decisions affect people's lives or finances. The neuro-symbolic architecture answers that requirement directly.

AllegroGraph's recent analysis makes the infrastructure argument explicit: knowledge graphs, ontologies, and SPARQL-based inferencing provide the symbolic layer that neuro-symbolic systems require. The technical pattern is clear — neural components handle perception, language understanding, and pattern recognition; symbolic components handle constraint enforcement, reasoning, and auditability. The two don't compete; they compose.

The strategic implication for semantic technology practitioners is significant. The tools and formalisms that have been developed over two decades of semantic web work — RDF, OWL, SHACL, SPARQL — are not legacy artifacts. They are the symbolic substrate that makes trustworthy AI architectures possible at enterprise scale.

·allegrograph.com·
The Rise of Neuro-Symbolic AI: A Spotlight in Gartner’s 2025 AI Hype Cycle - AllegroGraph
stack ranking of REVENUE in the DBMS market
stack ranking of REVENUE in the DBMS market
April means many things to many people. For some, it’s the first signs of spring after a long winter. For others (today in particular for my US-based readers) it’s tax time. On this particular April Wednesday (Wednesday is, after all, spaghetti day, as Tony Baer likes to remind me) it’s spaghetti time — at least on Gartner’s data management team! I'm pleased to be able to share the 2026 DBMS Spaghetti chart (covering data from 2025). Some things to keep in mind:

- This is only a small part of our market analysis. The underlying data that this is based on will publish shortly, including in-depth analysis of the market and, of course, forecasts.
- This is a stack ranking of REVENUE in the DBMS market. Pure-play open-source products are not included outside of commercialization vendors. If you are looking for a popularity contest, there are many other sources for that.
- This year's "Churn Index" sits at 42, down slightly from last year's 49. As a reminder, the Churn Index is calculated as the percentage of vendors that either gained or lost market position in the stack ranking. We further provide a positive and negative churn index for those who are interested.
- The market remains settled at the top. There has been no churn in the top 9 vendors for the last 3 years. With the exception of Tencent surpassing Huawei, there has been no churn for the top 17 vendors for the past 2 years. I have my own predictions on what might happen next, but you'll have to talk to me on inquiry for the details.
- DataStax has been acquired by IBM, and their revenue will henceforth be counted under IBM's.
- ServiceNow is now included as a named vendor in 2025.
- Rocket Software has acquired Vertica, but that will not be reflected until next year's analysis. That said, we no longer refer to "OpenText (Vertica)" in this year's report, instead going with the more accurate "OpenText".
- As always, pure-play cloud vendors are in light blue.
if you are a Gartner client and would like to discuss the DBMS market dynamics in more detail, please feel free to set up an inquiry. And keep an eye out for the detailed analysis which will publish in coming weeks. Special thanks to Robin Schumacher, Ph.D., Harshita Chibber and the quant team at Gartner for sourcing the numbers.
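Going by the definition in the post, the Churn Index is the percentage of vendors that gained or lost position in the stack ranking year over year. A sketch of that calculation (vendor names and rankings invented for illustration; Gartner's exact methodology may differ):

```python
def churn_index(prev_rank: list, curr_rank: list) -> float:
    """Percentage of vendors whose stack-ranking position changed between
    two years, computed over vendors present in both rankings."""
    common = set(prev_rank) & set(curr_rank)
    moved = [v for v in common if prev_rank.index(v) != curr_rank.index(v)]
    return round(100 * len(moved) / len(common), 1)

prev = ["A", "B", "C", "D", "E"]
curr = ["A", "B", "D", "C", "E"]   # C and D swapped positions
assert churn_index(prev, curr) == 40.0  # 2 of 5 vendors moved
```

Splitting `moved` into vendors whose index decreased vs. increased would give the positive and negative churn indexes the post mentions.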
stack ranking of REVENUE in the DBMS market
·linkedin.com·
stack ranking of REVENUE in the DBMS market
More ontology beginner mistakes
More ontology beginner mistakes
Fantastic list of beginner mistakes. Others I've found over the years:

* #Namespaces. Getting hung up too early on IRIs and namespaces - until you have an understanding of the problem domain, ex: works JUST fine.
* #Inheritance. Focusing on hierarchies and inheritance before knowing the concrete classes you're working with. Inheritance is largely an optimisation step that should be done later.
* #Not_Building_Examples. Start with examples; they often clarify what you're actually modelling far better than Protégé will. Protégé is a great tool, but keep it in the toolbox until you actually need it.
* #Upper_Ontologies. You don't NEED them. An upper ontology is a lot like a code framework: it's great for building interoperating systems and establishing interface abstractions, but until you know the general structure of your data, it is a distraction.
* #Things_Change. An ontology that does not account for change is an inventory, nothing more. Spend time understanding events, especially.
* #Turtle_Is_Just_An_Odd_Form_Of_JSON. Nope. Turtle is a way of describing a graph. Full stop. You can use JSON-LD as another way of describing that graph, but it's still RDF, and it follows the rules of RDF. If you are going to work with ontologies, you should learn Turtle, because it works on graph assumptions that exceed JSON.
* #Taxonomies_Matter. Taxonomies declare concepts, though they do not define them - that's what the ontological schema does, whether through OWL or through SHACL.
* #Established_Ontologies_Are_Useful_Not_Mandatory. There are many ontologies out there. Some are highly useful, some have limited utility, some are crap. Solve for your needs first; if they happen to align with existing ontologies, great, but don't assume they HAVE to.

There are other lessons learned. I'd love to hear from others in that respect.
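The Turtle-vs-JSON point can be made concrete: both serializations describe the same set of triples. A minimal sketch in plain Python, with a naive flattener written only for this tiny example (the ex: terms and the talk example are invented; real JSON-LD processing handles contexts, lists, and blank nodes that this toy ignores):

```python
# One RDF graph, as a set of (subject, predicate, object) triples.
triples = {
    ("ex:abi", "rdf:type", "ex:Person"),
    ("ex:abi", "ex:name", '"Abi Aryan"'),
    ("ex:abi", "ex:gaveTalk", "ex:graphragTalk"),
}

# The same graph in Turtle...
turtle = """
ex:abi a ex:Person ;
    ex:name "Abi Aryan" ;
    ex:gaveTalk ex:graphragTalk .
"""

# ...and in JSON-LD. It looks like JSON, but its meaning is the triple set.
jsonld = {
    "@id": "ex:abi",
    "@type": "ex:Person",
    "ex:name": "Abi Aryan",
    "ex:gaveTalk": {"@id": "ex:graphragTalk"},
}

def jsonld_to_triples(node: dict) -> set:
    """Naive flattener, sufficient for this single-node example only."""
    s = node["@id"]
    out = set()
    for key, value in node.items():
        if key == "@id":
            continue
        if key == "@type":
            out.add((s, "rdf:type", value))
        elif isinstance(value, dict):
            out.add((s, key, value["@id"]))
        else:
            out.add((s, key, f'"{value}"'))
    return out

assert jsonld_to_triples(jsonld) == triples  # same graph, two syntaxes
```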
·linkedin.com·
More ontology beginner mistakes
Reactome-ontology, a project aimed at representing Reactome knowledge in a more structured, reusable, and ontology-oriented way
Reactome-ontology, a project aimed at representing Reactome knowledge in a more structured, reusable, and ontology-oriented way
reactome-ontology, a project aimed at representing Reactome knowledge in a more structured, reusable, and ontology-oriented way
·linkedin.com·
Reactome-ontology, a project aimed at representing Reactome knowledge in a more structured, reusable, and ontology-oriented way
Why a Knowledge Graph Reduces System Load for Join‑Heavy Queries
Why a Knowledge Graph Reduces System Load for Join‑Heavy Queries
Why a Knowledge Graph Reduces System Load for Join-Heavy Queries

In a graph (or RDF triple store), every node stores direct pointers to its neighbors. Following a relationship is a pointer lookup — not a search, not a join, and not an index scan.

How Relational Databases Work

To answer a question that spans multiple tables (for example, `employees` → `departments`):

1. The database scans or indexes the `employees` table.
2. For each matching row, it looks up the corresponding entry in `departments`.
3. It materializes intermediate join results — often spilling to disk when memory runs out.

Every `JOIN` multiplies the work. With five joins, even a well-indexed system may evaluate **billions of combinations**, hitting CPU, I/O, and RAM hard.

How Knowledge Graphs Work

In a graph, relationships are stored as edges that directly link nodes. To find "which department does employee E work in?" the database:

1. Follows a pointer from the employee node to its `works_in` edge.
2. Follows that edge to the department node.

What This Means for Your Architecture

- Smaller hardware footprint – run multi-hop analytics on less hardware.
- Lower memory pressure – no temporary join tables clogging the cache.
- Predictable performance – execution time is tied to path length, not data volume.
- More concurrent queries – lightweight traversals free up CPU for other workloads.

The Bottom Line

A knowledge graph replaces costly joins, whose cost compounds with every hop, with lightweight pointer-level traversals. For any problem involving connected data — customers to products, components to failures, molecules to diseases — graph architecture cuts system load by orders of magnitude and keeps performance stable as your data grows.

**That's why knowledge graphs are the natural home for relationship-rich intelligence.**
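The contrast above can be made concrete with in-memory stand-ins: the relational answer builds its result by scanning tables, while the graph answer follows a stored pointer. A toy sketch (table contents and field names invented; no real engine on either side):

```python
# Relational shape: separate tables, related by a foreign key.
employees = [
    {"id": 1, "name": "E", "dept_id": 10},
    {"id": 2, "name": "F", "dept_id": 20},
]
departments = [
    {"id": 10, "name": "R&D"},
    {"id": 20, "name": "Sales"},
]

def join_lookup(emp_name):
    """Join: scan employees, then scan departments for each match."""
    for e in employees:              # scan table 1
        if e["name"] == emp_name:
            for d in departments:    # scan table 2 (the join)
                if d["id"] == e["dept_id"]:
                    return d["name"]

# Graph shape: the employee node stores a direct reference to its department.
rnd = {"name": "R&D"}
emp_e = {"name": "E", "works_in": rnd}   # edge = pointer

def traverse(node):
    """Traversal: one pointer dereference, no scan and no join."""
    return node["works_in"]["name"]

assert join_lookup("E") == traverse(emp_e) == "R&D"
```

Real databases mitigate the join cost with indexes and hash joins, but the structural point stands: the graph stores the relationship itself, so traversal cost tracks path length rather than table size.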
·linkedin.com·
Why a Knowledge Graph Reduces System Load for Join‑Heavy Queries
Time is Not a Label: Continuous Phase Rotation for Temporal Knowledge Graphs and Agentic Memory | Cool Papers - Immersive Paper Discovery
Time is Not a Label: Continuous Phase Rotation for Temporal Knowledge Graphs and Agentic Memory | Cool Papers - Immersive Paper Discovery
Structured memory representations such as knowledge graphs are central to autonomous agents and other long-lived systems. However, most existing approaches model time as discrete metadata, either sorting by recency (burying old-yet-permanent knowledge), simply overwriting outdated facts, or requiring an expensive LLM call at every ingestion step, leaving them unable to distinguish persistent facts from evolving ones. To address this, we introduce RoMem, a drop-in temporal knowledge graph module for structured memory systems, applicable to agentic memory and beyond. A pretrained Semantic Speed Gate maps each relation's text embedding to a volatility score, learning from data that evolving relations (e.g., "president of") should rotate fast while persistent ones (e.g., "born in") should remain stable. Combined with continuous phase rotation, this enables geometric shadowing: obsolete facts are rotated out of phase in complex vector space, so temporally correct facts naturally outrank contradictions without deletion. On temporal knowledge graph completion, RoMem achieves state-of-the-art results on ICEWS05-15 (72.6 MRR). Applied to agentic memory, it delivers 2–3× gains in MRR and answer accuracy on temporal reasoning (MultiTQ), leads on the hybrid LoCoMo benchmark, preserves static memory with zero degradation (DMR-MSC), and generalises zero-shot to unseen financial domains (FinTMMBench).
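The geometric-shadowing idea can be illustrated with complex numbers: rotate a fact's embedding by an angle proportional to elapsed time times a per-relation volatility, then score it against the query, so stale facts of fast-rotating relations drift out of phase. A stdlib-only sketch of the mechanism as described in the abstract (dimensions, volatilities, and the scoring function are invented for illustration, not RoMem's actual parameterization):

```python
import cmath

def score(fact_vec, query_vec, volatility, elapsed):
    """Rotate the fact by angle = volatility * elapsed in the complex plane,
    then measure alignment with the query (real part of the inner product)."""
    rotated = [z * cmath.exp(1j * volatility * elapsed) for z in fact_vec]
    return sum((z * w.conjugate()).real for z, w in zip(rotated, query_vec))

query = [1 + 0j, 1 + 0j]
fact = [1 + 0j, 1 + 0j]

# A volatile relation ("president of") rotates fast, so old assertions fall
# out of phase; a persistent relation ("born in") barely rotates at all.
stale_volatile = score(fact, query, volatility=1.0, elapsed=3.0)
fresh_volatile = score(fact, query, volatility=1.0, elapsed=0.1)
old_persistent = score(fact, query, volatility=0.01, elapsed=3.0)

assert fresh_volatile > stale_volatile   # recent volatile fact outranks the stale one
assert old_persistent > stale_volatile   # persistent fact survives the years
```

Nothing is deleted: the obsolete fact simply scores lower, which is the "outrank contradictions without deletion" behavior the abstract describes.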
·papers.cool·
Time is Not a Label: Continuous Phase Rotation for Temporal Knowledge Graphs and Agentic Memory | Cool Papers - Immersive Paper Discovery
An ontological approach to foster the convergence, interoperability and operationalization of frameworks for Trustworthy AI
An ontological approach to foster the convergence, interoperability and operationalization of frameworks for Trustworthy AI
AI systems are consistently evolving in terms of both capability and autonomy, with a holistic social impact. In this context of proliferation and fast technological evolution, the scientific...
An ontological approach to foster the convergence, interoperability and operationalization of frameworks for Trustworthy AI
·arxiv.org·
An ontological approach to foster the convergence, interoperability and operationalization of frameworks for Trustworthy AI
KG4CUT: an ontology to facilitate cutting tool selection and interoperability
KG4CUT: an ontology to facilitate cutting tool selection and interoperability
Selecting cutting tools for milling is a critical and complex task that directly affects product quality, cost, and operational efficiency. The growing diversity of tools and vendor-specific catalogues makes this process especially challenging, particularly for less experienced operators. In this paper, we present KG4CUT, an application ontology aligned with W3C Semantic Web standards and FAIR principles, designed to standardize and integrate cutting tool information across providers. To demonstrate its practical utility, we populated a knowledge graph using an automated pipeline that extracts structured data from real-world PDF catalogues. This graph serves as both a proof of concept and a functional basis for intelligent tool recommendation and cutting parameter retrieval, based on material properties, operation types, and geometric constraints. Evaluation with domain experts showed improved retrieval efficiency and reduced selection errors. KG4CUT thus supports the digitalization of machining knowledge and enables faster, more accurate process planning in industrial settings.
·link.springer.com·
KG4CUT: an ontology to facilitate cutting tool selection and interoperability