Article · 10 hr ago · ~9 min read

Best Practices for Integrating AI with InterSystems for Real-Time Analytics

Your data pipelines are running. Your dashboards are live. But if AI isn't embedded directly into your InterSystems environment, you're leaving the most valuable insights sitting idle—processed too slowly to act on.

Real-time analytics isn't just about speed anymore. It's about intelligence at the point of action. InterSystems IRIS, the company's flagship data platform, is purpose-built to close the gap between raw operational data and AI-driven decisions—without the latency tax of moving data to an external ML system.

 

In this guide, we break down the proven best practices for integrating AI with InterSystems for real-time analytics, covering everything from architecture patterns to model deployment strategies that actually hold up in production.

Quick Overview

| Aspect | Details |
|---|---|
| Platform | InterSystems IRIS Data Platform |
| Core Capability | Embedded AI/ML with real-time transactional + analytics workloads |
| Key Use Cases | Healthcare analytics, financial fraud detection, supply chain optimization |
| AI Integration Methods | IntegratedML, Python Gateway, PMML, REST API endpoints |
| Target Users | Data engineers, AI architects, enterprise developers |

Why InterSystems IRIS for AI-Driven Real-Time Analytics

Most enterprise analytics architectures suffer from a common flaw: the AI layer lives outside the data layer. Data has to travel—from operational databases to data lakes, through transformation pipelines, into ML platforms—before a prediction can be made. By then, the moment has often passed.

InterSystems IRIS takes a fundamentally different approach. It combines transactional processing (OLTP), analytics (OLAP), and AI/ML capabilities in a single, unified platform. This convergence isn't just a convenience—it's a performance breakthrough. According to InterSystems, IRIS can ingest and analyze millions of events per second while simultaneously running machine learning models against that live data.

The result: AI predictions generated in milliseconds, not minutes. For industries where the cost of latency is measured in lives (healthcare) or dollars (financial services), this architecture is a game-changer.

Best Practice #1: Use IntegratedML for In-Database Model Training

IntegratedML is InterSystems' declarative machine learning engine built directly into IRIS SQL. Rather than extracting data to an external Python or R environment, train and deploy models with SQL-style commands inside the database itself.

This approach eliminates the data movement overhead that plagues traditional ML pipelines. A model trained on 10 million patient records doesn't need to be serialized, transferred, and deserialized—it runs where the data lives.

How to Implement

  • Create a model: CREATE MODEL PatientRisk PREDICTING (RiskScore) FROM PatientData
  • Train with a single command: TRAIN MODEL PatientRisk
  • Generate predictions inline: SELECT PREDICT(PatientRisk) AS Risk FROM PatientData WHERE PatientID = 12345

Best practice: Use IntegratedML for structured tabular data where speed-to-deployment matters more than custom model architectures. For deep learning or custom neural networks, leverage Python Gateway instead.
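The three statements above can be wrapped in a small helper for scripting. This is an illustrative sketch: the SQL comes straight from the commands shown above, but the helper function name and the DB-API execution hint in the comments are assumptions, not part of IntegratedML itself.

```python
# Sketch: build the IntegratedML lifecycle statements shown above.
# The helper function is hypothetical scaffolding around the documented
# CREATE MODEL / TRAIN MODEL / PREDICT syntax.

def integratedml_statements(model: str, target: str, table: str) -> list[str]:
    """Return the CREATE / TRAIN / PREDICT statements for one model."""
    return [
        f"CREATE MODEL {model} PREDICTING ({target}) FROM {table}",
        f"TRAIN MODEL {model}",
        f"SELECT PREDICT({model}) AS Risk FROM {table}",
    ]

stmts = integratedml_statements("PatientRisk", "RiskScore", "PatientData")
for s in stmts:
    print(s)
# In a live environment each statement would be run through a DB-API
# cursor (e.g. cursor.execute(s)); connection details are omitted here.
```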

Best Practice #2: Leverage Python Gateway for Advanced ML Frameworks

IntegratedML handles a wide range of classification and regression problems, but enterprise AI often demands more—custom neural networks, NLP pipelines, reinforcement learning, or computer vision models built in TensorFlow, PyTorch, or scikit-learn.

InterSystems Python Gateway solves this by embedding Python execution natively within IRIS. Instead of building a separate microservice to run your Python models, you call them directly from ObjectScript or via SQL stored procedures. The data never leaves the IRIS environment.

Key Implementation Tips

  • Install Python Gateway as an IRIS add-on and configure your Python environment path in the IRIS Management Portal
  • Use the IRISNative API to pass IRIS globals directly into Python objects—eliminating serialization overhead
  • Cache frequently used model objects in memory using IRIS's built-in caching layer to avoid re-loading on every prediction request
  • For high-throughput scenarios, deploy models as persistent Python processes rather than loading them per-request
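The caching and persistent-process tips above can be sketched in plain Python. Everything here (the class name, the dummy loader) is hypothetical scaffolding; the point is that the model object is deserialized once and then served from memory on every subsequent prediction request.

```python
import threading

# Sketch of the caching tip above: load each model object once and reuse
# it across prediction requests instead of re-loading per call. The
# loader callable is a stand-in for real deserialization (e.g. loading
# a pickled scikit-learn model or a TensorFlow SavedModel).

class ModelCache:
    def __init__(self, loader):
        self._loader = loader          # callable: name -> model object
        self._models = {}
        self._lock = threading.Lock()  # safe under concurrent requests
        self.loads = 0                 # instrumentation: actual load count

    def get(self, name):
        with self._lock:
            if name not in self._models:
                self._models[name] = self._loader(name)
                self.loads += 1
            return self._models[name]

# Usage with a dummy loader standing in for real deserialization:
cache = ModelCache(lambda name: f"<model:{name}>")
for _ in range(1000):
    m = cache.get("fraud_v3")   # deserialized once, then served from memory
print(cache.loads)  # -> 1
```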

For organizations looking to accelerate production-grade AI deployment across complex enterprise environments, teams like Denebrix AI provide structured implementation support that spans model development, IRIS integration, and real-time pipeline validation.

Best Practice #3: Architect for Low-Latency with Adaptive Analytics

Real-time analytics requires more than fast data retrieval—it demands an architecture that adapts to changing data distributions. InterSystems Adaptive Analytics (powered by AtScale) bridges IRIS with BI tools like Tableau, Power BI, and Looker, providing a semantic layer that enables live, in-memory analytical queries without pre-aggregating data into cubes.

The key architectural principle here is pushdown optimization: analytics queries run as close to the data as possible, inside IRIS, rather than pulling raw rows into an external analytics engine. This can reduce query times from minutes to seconds for enterprise-scale datasets.

Architecture Recommendations

  • Define business metrics and KPIs in the Adaptive Analytics semantic layer—not in your BI tool—to ensure consistency across dashboards
  • Use IRIS columnar storage for analytics-heavy tables while keeping transactional tables in row-based storage
  • Implement multi-model data architecture: relational tables for transactions, globals for hierarchical data, and vector tables for similarity search in AI applications
  • Enable streaming analytics via IRIS's built-in message broker to process event streams without leaving the platform

Best Practice #4: Deploy Models with PMML for Portability

Not every AI model will be born inside IRIS. Data scientists often build models in external environments—SageMaker, Azure ML, Google Vertex AI—and need to deploy them into operational systems. InterSystems supports PMML (Predictive Model Markup Language), an open standard for representing trained models.

Importing a PMML model into IRIS means predictions can be generated by the platform without maintaining a live connection to the external ML environment. This is particularly valuable in regulated industries where data residency requirements prevent sending records to cloud inference endpoints.

PMML Deployment Workflow

  • Export trained model as PMML XML from your ML platform of choice
  • Import into IRIS using the DeepSee PMML engine or the InterSystems PMML Utils library
  • Wrap the PMML inference call in an IRIS stored procedure for easy integration with existing application code
  • Monitor prediction drift by logging PMML outputs alongside actuals in an IRIS analytics table
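To make the workflow concrete, here is a deliberately tiny, hypothetical sketch of what a PMML consumer does: parse the XML once, then score incoming rows. Real PMML files carry namespaces, data dictionaries, and many model types, and in IRIS the scoring would be handled by the platform's PMML engine rather than hand-rolled code like this.

```python
import xml.etree.ElementTree as ET

# Minimal sketch: score a simplified, hypothetical PMML linear-regression
# fragment. This only illustrates "parse the XML once, then predict from
# a feature dict" -- it is not a spec-complete PMML engine.

PMML = """
<PMML>
  <RegressionModel functionName="regression">
    <RegressionTable intercept="0.5">
      <NumericPredictor name="age" coefficient="0.02"/>
      <NumericPredictor name="bmi" coefficient="0.1"/>
    </RegressionTable>
  </RegressionModel>
</PMML>
"""

def load_regression(pmml_text):
    """Extract intercept and coefficients from the regression table."""
    table = ET.fromstring(pmml_text).find(".//RegressionTable")
    intercept = float(table.get("intercept"))
    coefs = {p.get("name"): float(p.get("coefficient"))
             for p in table.findall("NumericPredictor")}
    return intercept, coefs

def predict(intercept, coefs, row):
    """Linear score: intercept + sum(coefficient * feature value)."""
    return intercept + sum(coefs[k] * row[k] for k in coefs)

intercept, coefs = load_regression(PMML)
print(predict(intercept, coefs, {"age": 50, "bmi": 30}))  # 0.5 + 1.0 + 3.0 = 4.5
```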

Best Practice #5: Build GenAI Applications with Vector Search

Generative AI is redefining what's possible with enterprise data. InterSystems IRIS now supports vector embeddings natively, enabling semantic search, retrieval-augmented generation (RAG), and similarity-based recommendations directly within the platform—no external vector database required.

This is significant for real-time analytics: imagine a clinical decision support system that retrieves semantically similar patient cases at the moment a physician places an order, or a fraud detection engine that finds transactions matching known fraud patterns using embedding similarity rather than rigid rule matching.

Implementation Blueprint

  • Generate embeddings using models like OpenAI text-embedding-3-small or open-source alternatives (BAAI/bge, sentence-transformers)
  • Store embeddings in IRIS vector-type columns alongside your structured data
  • Use VECTOR_COSINE() or VECTOR_DOT_PRODUCT() SQL functions to run similarity queries inline with your analytics
  • For RAG applications, combine IRIS vector search with an LLM API call, passing retrieved context as part of the prompt
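As a mental model for the similarity functions above, here is cosine similarity in plain Python over made-up three-dimensional embeddings (real embeddings have hundreds or thousands of dimensions). The ranking mirrors what an ORDER BY on a cosine-similarity expression would produce:

```python
import math

# Sketch of what a cosine-similarity vector search computes, in pure
# Python. The embedding values below are invented for illustration; in
# practice they come from an embedding model.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

stored = {
    "case_101": [0.9, 0.1, 0.0],
    "case_102": [0.1, 0.9, 0.1],
    "case_103": [0.8, 0.2, 0.1],
}
query = [1.0, 0.0, 0.0]

# Rank stored cases by similarity to the query, most similar first.
ranked = sorted(stored, key=lambda k: cosine(query, stored[k]), reverse=True)
print(ranked)  # case_101 first: it points almost exactly along the query
```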

AI Integration Methods: Comparison at a Glance

| Method | Best For | Latency | Data Movement |
|---|---|---|---|
| IntegratedML | Tabular classification/regression | Very Low | None |
| Python Gateway | Custom ML frameworks | Low | None |
| PMML Import | Pre-trained external models | Low | None |
| REST API (external) | Large LLMs, cloud models | Medium-High | Yes |
| Vector Search | Semantic/similarity queries | Very Low | None |

Best Practice #6: Monitor, Retrain, and Govern AI Models Continuously

Production AI models degrade. Data distributions shift, business rules change, and models that were 95% accurate at deployment can slip to 70% within months. Real-time analytics environments are especially vulnerable because they're ingesting live, unpredictable data.

InterSystems IRIS provides the infrastructure for continuous model monitoring through its analytics and auditing capabilities. Build feedback loops that log predictions, compare them against actuals, and trigger retraining workflows when accuracy falls below defined thresholds.

Governance Checklist

  • Log every model prediction with input features, output score, and timestamp into an IRIS audit table
  • Set up automated drift detection using statistical tests (KS-test, PSI) on incoming feature distributions
  • Define retraining triggers: schedule-based (weekly), performance-based (accuracy < threshold), or event-based (data schema change)
  • Maintain model versioning in IRIS globals for rollback capability
  • Implement role-based access controls (RBAC) on model endpoints to ensure only authorized services can invoke AI predictions
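The PSI drift check from the list above can be sketched in a few lines of Python. The bin proportions are invented for illustration, and the 0.10/0.25 thresholds are common industry rules of thumb rather than anything IRIS-specific:

```python
import math

# Sketch of a PSI (Population Stability Index) drift check: compare the
# binned distribution of an incoming feature against its distribution at
# training time. Both distributions below are illustrative.

def psi(expected, actual, eps=1e-6):
    """PSI between two binned distributions (lists of proportions)."""
    total = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, eps), max(a, eps)   # guard against empty bins
        total += (a - e) * math.log(a / e)
    return total

baseline = [0.25, 0.25, 0.25, 0.25]   # feature distribution at training time
today    = [0.10, 0.20, 0.30, 0.40]   # distribution of incoming live data

score = psi(baseline, today)
if score > 0.25:
    print(f"PSI={score:.3f}: significant drift, trigger retraining")
elif score > 0.10:
    print(f"PSI={score:.3f}: moderate drift, investigate")
```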

Frequently Asked Questions

What is InterSystems IRIS IntegratedML?

IntegratedML is a declarative machine learning engine embedded directly in InterSystems IRIS. It lets developers train, validate, and deploy predictive models using SQL-like syntax, without moving data to an external ML platform. It's designed to reduce the complexity of bringing AI into production for developers who aren't data scientists.

How does InterSystems IRIS handle real-time AI inference?

IRIS runs AI inference in-process with the operational data, eliminating network round-trips to external ML services. Through IntegratedML, Python Gateway, and native PMML support, predictions are generated as part of SQL queries or application transactions—delivering millisecond latency at enterprise scale.

Can I use Python and TensorFlow with InterSystems IRIS?

Yes. The Python Gateway add-on enables direct Python execution within the IRIS environment. You can use any Python ML library—TensorFlow, PyTorch, scikit-learn, HuggingFace—and call models from ObjectScript or SQL. This allows teams to build models in familiar Python environments and deploy them without a separate inference microservice.

What are the limitations of IntegratedML?

IntegratedML is optimized for structured tabular data and standard ML tasks (classification, regression). It doesn't support custom neural network architectures, unstructured data like images or audio, or advanced techniques such as reinforcement learning. For these use cases, Python Gateway or external model integration via REST or PMML is recommended.

How does InterSystems IRIS support Generative AI applications?

IRIS supports GenAI through native vector storage and similarity search functions, enabling retrieval-augmented generation (RAG) workflows without a separate vector database. Teams can store embeddings alongside structured data, run semantic search queries in SQL, and combine results with external LLM API calls for applications like intelligent document retrieval or clinical decision support.

Is InterSystems IRIS suitable for healthcare AI analytics?

Yes, IRIS is widely adopted in healthcare, with purpose-built products like HealthShare and TrakCare built on the platform. It supports HL7 FHIR natively, provides HIPAA-compliant data handling, and integrates AI capabilities directly with clinical data—making it well-suited for predictive analytics in clinical and operational healthcare settings.

Final Thoughts

Integrating AI with InterSystems for real-time analytics isn't a single decision—it's a series of architectural choices that compound over time. Start with IntegratedML for fast time-to-value on structured prediction tasks. Layer in Python Gateway when your models outgrow declarative SQL. Embrace vector search as GenAI reshapes what enterprise applications can do.

The organizations winning with real-time AI aren't just faster—they're building systems where intelligence is inseparable from operations. InterSystems IRIS gives you the platform to do exactly that. The practices in this guide give you the roadmap.

Question · 16 hr ago

Ensemble is not giving back ACK after ENQ and closes with EOT.

Hello Team,

I am currently working with the CD Ruby machine, which is connected through DIGI. When I click on the “Test Link” option on the instrument, I can see the following behavior in Wireshark logs:

Ensemble sends an ACK (06) after receiving ENQ (05), followed by EOT (04) (much like the photo above). However, when another ENQ is received, Ensemble does not send an ACK in response. As a result, the instrument displays a failure message.

Also attaching the Ensemble settings:

I am using a TCP service with an inbound adapter configured for the ASTM protocol. Is there a way to configure the system to send an ACK in response to every ENQ? Please let me know if any further details are needed.

Article · 22 hr ago · ~6 min read

APM - Using the IRIS or Caché History Monitor

APM usually focuses on application activity, but collecting information about system usage provides important context data that helps you understand and manage your application's performance, so I am including the IRIS History Monitor in this series.

In this article, I will briefly describe how to start the IRIS or Caché History Monitor to create a record of system-level activity, complementing the application activity and performance information you collect. I will also show SQL examples for accessing the information.

What is the IRIS or Caché History Monitor?

The History Monitor is available in IRIS and in the earlier Caché and Ensemble platforms. It is an extension of the IRIS or Caché System Monitor, and it keeps a persistent record of metrics related to database activity (for example, global reads and updates) and system usage (for example, CPU usage).

There are several tools for collecting these statistics; some go into enormous detail but can be hard to interpret. The History Monitor is designed to be simple to use, to run continuously on production systems, and to require less effort and specialist knowledge to understand its output.

It stores information in a small number of hourly and daily tables that are easily queried with SQL, so if you start it today the historical record will be available when you need it. And, of course, you can still supplement the historical record with more detailed data collection when you have a problem to investigate.

What are the costs and benefits?

The History Monitor is very lightweight and will not add a significant load to a running system. The disk space used is also very small, with storage amounting to about 130 MB per year even if you choose to extend the retention time of the hourly statistics as recommended in this article.

It is easy to set up, and the output requires no further analysis before it can be used.

On the day you enable the History Monitor you will see little or no advantage over other tools that can give more immediate detail. The benefit comes weeks or months later, when you are working on a capacity plan or investigating a performance problem.

It provides a historical record of many important metrics, including:

  • CPU usage
  • Size of database and journal files
  • Global references and updates
  • Physical reads and writes
  • License usage

It also records a large number of more detailed technical metrics that can be useful when investigating changes in an application's performance.

How do I access the data stored by the History Monitor?

Tables

The information is stored in the %SYS namespace. It can easily be accessed using SQL, and can therefore be analyzed with any popular reporting tool. There are four main daily tables: SYS_History.Daily_DB, SYS_History.Daily_Sys, SYS_History.Daily_Perf, and SYS_History.Daily_WD, which store the daily summaries. Equivalent tables store the hourly summaries.

Daily and hourly fields

The daily and hourly tables each have a Daily or Hourly field in the format '64491||14400', where the two parts are the $h date and time values at the moment the background job ran to generate the data. The time part has little meaning in the daily tables.
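As a quick illustration of the field format just described, the $h day count can be decoded in Python (day 0 of $HOROLOG is 31 December 1840); the helper name is mine, not part of the platform:

```python
from datetime import date, timedelta

# Sketch: decode the Daily/Hourly field format '64491||14400', i.e.
# $HOROLOG-style days since 31 Dec 1840, then seconds past midnight.

def decode_daily(value):
    days, _, seconds = value.partition("||")
    d = date(1840, 12, 31) + timedelta(days=int(days))
    return d, int(seconds or 0)

d, secs = decode_daily("64491||14400")
print(d, secs // 3600)  # 2017-07-27 at hour 4
```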

The element_key field

Some tables include the average and maximum values observed in each time period, as well as the standard deviation. The type of record is indicated by the value of the element_key field.

So a typical query to view growth in average daily CPU usage would be:

SELECT Substr(DS.Daily,1,5) AS DateH, (100-DS.Sys_CPUIdle) AS AvgCPUBusy
FROM SYS_History.Daily_SYS DS
WHERE element_key='Avg'
ORDER BY DateH

How do I configure and start the History Monitor?

Open a Caché terminal session and change to the %SYS namespace. Then run the command

Do ^%SYSMONMGR

You will be presented with a series of character-based menus. Enter the numbers to make the following selections:

  1) Manage Application Monitor
  2) Manage Monitor Classes

Then select the activate option twice and enter the class names:

  1) Activate/Deactivate Monitor Class
     %Monitor.System.HistoryPerf
     Yes
  1) Activate/Deactivate Monitor Class
     %Monitor.System.HistorySys
     Yes

There are several other classes, but do not be tempted to activate them without testing first. Some use PERFMON and are not suitable for running on a production system.

Then use the exit option until you return to the first menu. To activate the changes, stop and then start the System Monitor from the first menu:

  1) Start/Stop System Monitor   (stop)
  1) Start/Stop System Monitor   (start)

The History Monitor will run continuously, even if the system is restarted.

You may want to keep the hourly statistics for longer than the default of 60 days. To do this, use the SetPurge() method. For example, to keep hourly statistics for a year:

%SYS>do ##class(SYS.History.Hourly).SetPurge(365)

A more complex SQL example

For a more complicated example, suppose you also want the daily CPU average, the average CPU usage between 9 am and 12 pm, and information about global references and updates:

SELECT Substr(DS.Daily,1,5) AS Day,
       (100-DS.Sys_CPUIdle) AS Daily_Avg_CPU,
       Round(AVG(100-H1.Sys_CPUIdle),2) AS Morning_Avg_CPU,
       DP.Perf_GloRef, DP.Perf_GloUpdate
FROM SYS_History.Daily_Perf DP,
     SYS_History.Daily_SYS DS,
     SYS_History.Hourly_Sys H1
WHERE DP.Daily=DS.Daily AND DP.element_key='Avg' AND DS.element_key='Avg'
  AND H1.element_key='Avg' AND Substr(DS.Daily,1,5)=Substr(H1.Hourly,1,5)
  AND Substr(H1.Hourly,8,12) IN (32400,36000,39600)
GROUP BY DS.Daily

 

On my test system this produces:

| Day | Daily_Avg_CPU | Morning_Avg_CPU | Perf_GloRef | Perf_GloUpdate |
|---|---|---|---|---|
| 64514 | 13.64 | 2.25 | 99.03 | 7.73 |
| 64515 | 19.94 | 14.67 | 91.95 | 6.32 |
| 64516 | 8.79 | 12.14 | 102.21 | 6.91 |
| 64517 | 14.09 | 3.36 | 39729.06 | 5393.97 |
| 64518 | 20.26 | 25.11 | 15036.53 | 60.63 |
| 64519 | 9.27 | 15.5 | 3898.47 | 153.68 |
| 64520 | 5.54 | 1.78 | 87.94 | 5.65 |
| 64521 | 6.08 | 1.89 | 117.49 | 6.73 |
| 64524 | 17.8 | 16.81 | 70.8 | 5.14 |

Documentation

The History Monitor is fully described in the documentation.

Question · Feb 17

Data Repoint NULLS Data In New Class Table When Old Class Property Is Changed & Saved

We are attempting to "repoint" old class data to new class data to save disk space and reduce data redundancy across multiple tables. This works up to a point. In essence, the two classes share the same data, index, and stream globals. But if an ID in the Old_Class is opened, a property is modified, and the object is saved, the property that exists in the New_Class (but not in the Old_Class) is NULLed/blanked.

Simplified explanation of data and what’s occurring.

| Old Class Property | Old Class Value | New Class Property | New Class Value |
|---|---|---|---|
| First_Name | John | First_Name | John |
| Middle_Initial | Q | Middle_Initial | Q |
| Last_Name | Public | Last_Name | Public |
| Date_Of_Birth | 1/1/1965 | Date_Of_Birth | 1/1/1965 |
| SSN | 123-45-6789 | SSN | 123-45-6789 |
|  |  | Marital_Status | Married |

When the Old_Class is opened, any field in the Old_Class is modified, and a %Save is run, Marital_Status becomes NULL/blank.

Why does a %Save on the Old_Class result in a NULL/blank value in the data storage global and in the New_Class table? We have already checked journaling, and it sets a null into the $List position for "Marital_Status".

Is there a way to open the record via the Old_Class, alter a property value, and perform a %Save without "losing" the value in the Marital_Status field in the New_Class?

Article · Feb 17 · ~12 min read

AI Agents from Scratch - Part 2: Giving the Brain a Body


In Part 1, we laid the technical foundation of MAIS (Multi-Agent Interoperability Systems). We successfully connected the "Brain", built a robust Adapter using LiteLLM, protected our API keys with IRIS Credentials, and finally cracked the code of the interoperability puzzle with Python.
