Your data pipelines are running. Your dashboards are live. But if AI isn't embedded directly into your InterSystems environment, you're leaving the most valuable insights sitting idle—processed too slowly to act on.
Real-time analytics isn't just about speed anymore. It's about intelligence at the point of action. InterSystems IRIS, the company's flagship data platform, is purpose-built to close the gap between raw operational data and AI-driven decisions—without the latency tax of moving data to an external ML system.
In this guide, we break down the proven best practices for integrating AI with InterSystems for real-time analytics, covering everything from architecture patterns to model deployment strategies that actually hold up in production.
Quick Overview
| Aspect | Details |
| --- | --- |
| Platform | InterSystems IRIS Data Platform |
| Core Capability | Embedded AI/ML with real-time transactional + analytics workloads |
| Key Use Cases | Healthcare analytics, financial fraud detection, supply chain optimization |
| AI Integration Methods | IntegratedML, Python Gateway, PMML, REST API endpoints |
| Target Users | Data engineers, AI architects, enterprise developers |
Why InterSystems IRIS for AI-Driven Real-Time Analytics
Most enterprise analytics architectures suffer from a common architectural flaw: the AI layer lives outside the data layer. Data has to travel—from operational databases to data lakes, through transformation pipelines, into ML platforms—before a prediction can be made. By then, the moment has often passed.
InterSystems IRIS takes a fundamentally different approach. It combines transactional processing (OLTP), analytics (OLAP), and AI/ML capabilities in a single, unified platform. This convergence isn't just a convenience—it's a performance breakthrough. According to InterSystems, IRIS can ingest and analyze millions of events per second while simultaneously running machine learning models against that live data.
The result: AI predictions generated in milliseconds, not minutes. For industries where the cost of latency is measured in lives (healthcare) or dollars (financial services), this architecture is a game-changer.
Best Practice #1: Use IntegratedML for In-Database Model Training
IntegratedML is InterSystems' declarative machine learning engine built directly into IRIS SQL. Rather than extracting data to an external Python or R environment, train and deploy models with SQL-style commands inside the database itself.
This approach eliminates the data movement overhead that plagues traditional ML pipelines. A model trained on 10 million patient records doesn't need to be serialized, transferred, and deserialized—it runs where the data lives.
How to Implement
- Create a model: `CREATE MODEL PatientRisk PREDICTING (RiskScore) FROM PatientData`
- Train with a single command: `TRAIN MODEL PatientRisk`
- Generate predictions inline: `SELECT PREDICT(PatientRisk) AS Risk FROM PatientData WHERE PatientID = 12345`
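The three steps above can be driven from application code as well. The sketch below builds those IntegratedML statements in Python; the `iris.connect()` call shown in the trailing comment is an assumption about the InterSystems Python DB-API driver, not a tested invocation.

```python
# Sketch of the IntegratedML lifecycle from application code. The SQL
# mirrors the three commands listed above; connection details are
# hypothetical.

def integratedml_statements(model: str, target: str, table: str, key: int) -> list[str]:
    """Build the three IntegratedML statements for a train-and-predict cycle."""
    return [
        f"CREATE MODEL {model} PREDICTING ({target}) FROM {table}",
        f"TRAIN MODEL {model}",
        f"SELECT PREDICT({model}) AS Risk FROM {table} WHERE PatientID = {key}",
    ]

if __name__ == "__main__":
    for stmt in integratedml_statements("PatientRisk", "RiskScore", "PatientData", 12345):
        print(stmt)
    # Against a live IRIS instance these would be executed through a
    # DB-API cursor, e.g. (assumed driver usage):
    #   import iris
    #   conn = iris.connect("localhost", 1972, "USER", "user", "pass")
    #   cur = conn.cursor()
    #   cur.execute(stmt)
```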
Best practice: Use IntegratedML for structured tabular data where speed-to-deployment matters more than custom model architectures. For deep learning or custom neural networks, leverage Python Gateway instead.
Best Practice #2: Leverage Python Gateway for Advanced ML Frameworks
IntegratedML handles a wide range of classification and regression problems, but enterprise AI often demands more—custom neural networks, NLP pipelines, reinforcement learning, or computer vision models built in TensorFlow, PyTorch, or scikit-learn.
InterSystems Python Gateway solves this by embedding Python execution natively within IRIS. Instead of building a separate microservice to run your Python models, you call them directly from ObjectScript or via SQL stored procedures. The data never leaves the IRIS environment.
Key Implementation Tips
- Install Python Gateway as an IRIS add-on and configure your Python environment path in the IRIS Management Portal
- Use the IRISNative API to pass IRIS globals directly into Python objects—eliminating serialization overhead
- Cache frequently used model objects in memory using IRIS's built-in caching layer to avoid re-loading on every prediction request
- For high-throughput scenarios, deploy models as persistent Python processes rather than loading them per-request
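The "load once, predict many" pattern from the last two tips can be sketched in a few lines. The loader function and model interface here are hypothetical stand-ins for whatever framework (TensorFlow, PyTorch, scikit-learn) you actually deploy.

```python
# Minimal sketch of keeping deserialized model objects resident in a
# persistent Python process instead of re-loading them per request.

from typing import Any, Callable

class ModelCache:
    """Hold loaded model objects in memory for reuse across predictions."""

    def __init__(self, loader: Callable[[str], Any]):
        self._loader = loader
        self._models: dict[str, Any] = {}

    def get(self, name: str) -> Any:
        # Load on first request only; later predictions reuse the
        # in-memory object instead of re-reading it from disk.
        if name not in self._models:
            self._models[name] = self._loader(name)
        return self._models[name]

if __name__ == "__main__":
    loads = []
    cache = ModelCache(loader=lambda name: loads.append(name) or f"model:{name}")
    cache.get("fraud_v3")
    cache.get("fraud_v3")
    print(len(loads))  # the model was loaded exactly once
```

In a real deployment the cache would live inside the long-running Python Gateway worker, so every prediction after the first skips deserialization entirely.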
For organizations looking to accelerate production-grade AI deployment across complex enterprise environments, teams like Denebrix AI provide structured implementation support that spans model development, IRIS integration, and real-time pipeline validation.
Best Practice #3: Architect for Low-Latency with Adaptive Analytics
Real-time analytics requires more than fast data retrieval—it demands an architecture that adapts to changing data distributions. InterSystems Adaptive Analytics (powered by AtScale) bridges IRIS with BI tools like Tableau, Power BI, and Looker, providing a semantic layer that enables live, in-memory analytical queries without pre-aggregating data into cubes.
The key architectural principle here is pushdown optimization: analytics queries run as close to the data as possible, inside IRIS, rather than pulling raw rows into an external analytics engine. This can reduce query times from minutes to seconds for enterprise-scale datasets.
Architecture Recommendations
- Define business metrics and KPIs in the Adaptive Analytics semantic layer—not in your BI tool—to ensure consistency across dashboards
- Use IRIS columnar storage for analytics-heavy tables while keeping transactional tables in row-based storage
- Implement multi-model data architecture: relational tables for transactions, globals for hierarchical data, and vector tables for similarity search in AI applications
- Enable streaming analytics via IRIS's built-in message broker to process event streams without leaving the platform
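To make the pushdown principle concrete, here is the same daily-revenue metric computed two ways. In production only the SQL string would be sent to IRIS; the Python rollup stands in for what a client-side BI engine would do after pulling every raw row. Table and column names are hypothetical.

```python
# Pushdown vs. client-side aggregation, illustrated on toy data.

from collections import defaultdict

PUSHED_DOWN_SQL = """
SELECT TxnDate, SUM(Amount) AS Revenue
FROM Sales.Transactions
GROUP BY TxnDate
"""  # aggregation runs inside IRIS; one row per day crosses the wire

def client_side_rollup(rows):
    """What a BI tool does when it pulls raw rows and aggregates locally."""
    totals = defaultdict(float)
    for txn_date, amount in rows:
        totals[txn_date] += amount
    return dict(totals)

if __name__ == "__main__":
    raw = [("2024-05-01", 10.0), ("2024-05-01", 5.0), ("2024-05-02", 7.5)]
    print(client_side_rollup(raw))  # {'2024-05-01': 15.0, '2024-05-02': 7.5}
```

Both paths produce the same numbers; the difference is that the pushed-down query moves two result rows instead of millions of transaction rows.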
Best Practice #4: Deploy Models with PMML for Portability
Not every AI model will be born inside IRIS. Data scientists often build models in external environments—SageMaker, Azure ML, Google Vertex AI—and need to deploy them into operational systems. InterSystems supports PMML (Predictive Model Markup Language), an open standard for representing trained models.
Importing a PMML model into IRIS means predictions can be generated by the platform without maintaining a live connection to the external ML environment. This is particularly valuable in regulated industries where data residency requirements prevent sending records to cloud inference endpoints.
PMML Deployment Workflow
- Export trained model as PMML XML from your ML platform of choice
- Import into IRIS using the DeepSee PMML engine or the InterSystems PMML Utils library
- Wrap the PMML inference call in an IRIS stored procedure for easy integration with existing application code
- Monitor prediction drift by logging PMML outputs alongside actuals in an IRIS analytics table
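The final step of the workflow, logging PMML outputs alongside actuals, can be sketched as a rolling accuracy check. In IRIS the log would be an analytics table; a plain list of `(predicted, actual)` pairs stands in for it here, and the 0.90 threshold is an illustrative choice.

```python
# Sketch of drift monitoring on a prediction log.

def rolling_accuracy(log, window=100):
    """Share of the last `window` logged predictions that matched actuals."""
    recent = log[-window:]
    if not recent:
        return None
    hits = sum(1 for predicted, actual in recent if predicted == actual)
    return hits / len(recent)

if __name__ == "__main__":
    log = [(1, 1), (0, 0), (1, 0), (1, 1)]  # (predicted, actual) pairs
    acc = rolling_accuracy(log)
    print(acc)         # 0.75
    print(acc < 0.90)  # True -> would trigger a retraining workflow
```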
Best Practice #5: Build GenAI Applications with Vector Search
Generative AI is redefining what's possible with enterprise data. InterSystems IRIS now supports vector embeddings natively, enabling semantic search, retrieval-augmented generation (RAG), and similarity-based recommendations directly within the platform—no external vector database required.
This is significant for real-time analytics: imagine a clinical decision support system that retrieves semantically similar patient cases at the moment a physician places an order, or a fraud detection engine that finds transactions matching known fraud patterns using embedding similarity rather than rigid rule matching.
Implementation Blueprint
- Generate embeddings using models like OpenAI text-embedding-3-small or open-source alternatives (BAAI/bge, sentence-transformers)
- Store embeddings in IRIS vector-type columns alongside your structured data
- Use VECTOR_COSINE() or VECTOR_DOT_PRODUCT() SQL functions to run similarity queries inline with your analytics
- For RAG applications, combine IRIS vector search with an LLM API call, passing retrieved context as part of the prompt
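To show what a `VECTOR_COSINE()`-style query computes, here is cosine similarity and a top-k ranking in plain Python. The toy 3-dimensional vectors and case IDs are illustrative; real embedding models produce hundreds or thousands of dimensions.

```python
# Cosine-similarity ranking, the operation behind vector search.

import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_k(query, rows, k=2):
    """Rank stored (id, embedding) rows by similarity to the query vector."""
    scored = [(row_id, cosine_similarity(query, emb)) for row_id, emb in rows]
    return sorted(scored, key=lambda s: s[1], reverse=True)[:k]

if __name__ == "__main__":
    cases = [("case-a", [1.0, 0.0, 0.0]),
             ("case-b", [0.9, 0.1, 0.0]),
             ("case-c", [0.0, 1.0, 0.0])]
    for row_id, score in top_k([1.0, 0.05, 0.0], cases, k=2):
        print(row_id, round(score, 3))
```

In IRIS the ranking happens inside the SQL engine over vector-type columns, so the similar cases come back in the same query that joins your structured data.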
AI Integration Methods: Comparison at a Glance
| Method | Best For | Latency | Data Movement |
| --- | --- | --- | --- |
| IntegratedML | Tabular classification/regression | Very Low | None |
| Python Gateway | Custom ML frameworks | Low | None |
| PMML Import | Pre-trained external models | Low | None |
| REST API (external) | Large LLMs, cloud models | Medium-High | Yes |
| Vector Search | Semantic/similarity queries | Very Low | None |
Best Practice #6: Monitor, Retrain, and Govern AI Models Continuously
Production AI models degrade. Data distributions shift, business rules change, and models that were 95% accurate at deployment can slip to 70% within months. Real-time analytics environments are especially vulnerable because they're ingesting live, unpredictable data.
InterSystems IRIS provides the infrastructure for continuous model monitoring through its analytics and auditing capabilities. Build feedback loops that log predictions, compare them against actuals, and trigger retraining workflows when accuracy falls below defined thresholds.
Governance Checklist
- Log every model prediction with input features, output score, and timestamp into an IRIS audit table
- Set up automated drift detection using statistical tests (KS-test, PSI) on incoming feature distributions
- Define retraining triggers: schedule-based (weekly), performance-based (accuracy < threshold), or event-based (data schema change)
- Maintain model versioning in IRIS globals for rollback capability
- Implement role-based access controls (RBAC) on model endpoints to ensure only authorized services can invoke AI predictions
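The PSI (Population Stability Index) check named in the checklist compares a feature's live distribution against its training baseline over matched bins. The sketch below uses the common rule of thumb that PSI above 0.2 signals significant drift; the bin proportions are made-up examples.

```python
# Population Stability Index over two binned distributions.

import math

def psi(expected, actual, eps=1e-6):
    """PSI for two lists of bin proportions (each summing to 1)."""
    total = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, eps), max(a, eps)  # guard against empty bins
        total += (a - e) * math.log(a / e)
    return total

if __name__ == "__main__":
    baseline = [0.25, 0.25, 0.25, 0.25]  # training-time distribution
    stable   = [0.24, 0.26, 0.25, 0.25]  # live data, barely moved
    shifted  = [0.10, 0.15, 0.25, 0.50]  # live data, clearly drifted
    print(psi(baseline, stable) < 0.1)    # True -> no action
    print(psi(baseline, shifted) > 0.2)   # True -> retraining trigger
```

Running this check on each monitored feature after every ingestion window turns the "performance-based retraining trigger" above into a concrete, automatable test.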
Frequently Asked Questions
What is InterSystems IRIS IntegratedML?
IntegratedML is a declarative machine learning engine embedded directly in InterSystems IRIS. It lets developers train, validate, and deploy predictive models using SQL-like syntax, without moving data to an external ML platform. It's designed to reduce the complexity of bringing AI into production for developers who aren't data scientists.
How does InterSystems IRIS handle real-time AI inference?
IRIS runs AI inference in-process with the operational data, eliminating network round-trips to external ML services. Through IntegratedML, Python Gateway, and native PMML support, predictions are generated as part of SQL queries or application transactions—delivering millisecond latency at enterprise scale.
Can I use Python and TensorFlow with InterSystems IRIS?
Yes. The Python Gateway add-on enables direct Python execution within the IRIS environment. You can use any Python ML library—TensorFlow, PyTorch, scikit-learn, HuggingFace—and call models from ObjectScript or SQL. This allows teams to build models in familiar Python environments and deploy them without a separate inference microservice.
What are the limitations of IntegratedML?
IntegratedML is optimized for structured tabular data and standard ML tasks (classification, regression). It doesn't support custom neural network architectures, unstructured data like images or audio, or advanced techniques such as reinforcement learning. For these use cases, Python Gateway or external model integration via REST or PMML is recommended.
How does InterSystems IRIS support Generative AI applications?
IRIS supports GenAI through native vector storage and similarity search functions, enabling retrieval-augmented generation (RAG) workflows without a separate vector database. Teams can store embeddings alongside structured data, run semantic search queries in SQL, and combine results with external LLM API calls for applications like intelligent document retrieval or clinical decision support.
Is InterSystems IRIS suitable for healthcare AI analytics?
Yes, IRIS is widely adopted in healthcare, with purpose-built products like HealthShare and TrakCare built on the platform. It supports HL7 FHIR natively, provides HIPAA-compliant data handling, and integrates AI capabilities directly with clinical data—making it well-suited for predictive analytics in clinical and operational healthcare settings.
Final Thoughts
Integrating AI with InterSystems for real-time analytics isn't a single decision—it's a series of architectural choices that compound over time. Start with IntegratedML for fast time-to-value on structured prediction tasks. Layer in Python Gateway when your models outgrow declarative SQL. Embrace vector search as GenAI reshapes what enterprise applications can do.
The organizations winning with real-time AI aren't just faster—they're building systems where intelligence is inseparable from operations. InterSystems IRIS gives you the platform to do exactly that. The practices in this guide give you the roadmap.