Announcement
· February 18

Meet Ashok Kumar - New Developer Community Moderator!

Hi Community,

Please welcome @Ashok Kumar T as our new Moderator in the Developer Community Team! 🎉

Let's greet Ashok with a round of applause and look at his bio!

@Ashok Kumar T is a Senior Software Engineer. 

A few words from Ashok: 

I'm a senior Software Engineer with over a decade of experience specializing in the InterSystems technology stack. Since 2014, my focus has been on leveraging the full power of the InterSystems ecosystem to solve complex data and integration challenges. I bring a deep understanding of both ObjectScript and modern IRIS implementations.

My professional philosophy is rooted in a commitment to core values: a constant willingness to learn and a proactive approach to sharing knowledge.

WARM WELCOME!

Thank you and congratulations, @Ashok Kumar T 👏

We're glad to have you on our moderators' team!

Discussion (10)
Article
· February 18 · 6 min read

pyprod: Pure Python IRIS Interoperability

InterSystems IRIS Productions provide a powerful framework for connecting disparate systems across various protocols and message formats in a reliable, observable, and scalable manner. intersystems_pyprod, short for InterSystems Python Productions, is a Python library that enables developers to build these interoperability components entirely in Python. Designed for flexibility, it supports a hybrid approach: you can seamlessly mix new Python-based components with existing ObjectScript-based ones, leveraging your established IRIS infrastructure. Once defined, these Python components are managed just like any other; they can be added, configured, and connected using the IRIS Production Configuration page.


A Quick Primer on InterSystems IRIS Productions

Key Elements of a Production

Image from Learning Services training material

An IRIS Production generally receives data from external interfaces, processes it through coordinated steps, and routes it to its destination. As messages move through the system, they are automatically persisted, making the entire flow fully traceable through IRIS’s visual trace and logging tools. The architecture relies on certain key elements:

  1. Business Hosts: These are the core building blocks—Services, Processes, and Operations—that pass persistable messages between one another.
  2. Adapters: Inbound and outbound adapters manage the interaction with the external world, handling the specific protocols needed to receive and send data.
  3. Callbacks: The engine uses specific callback methods to pass messages between hosts, either synchronously or asynchronously. These callbacks follow strict signatures and return a Status object to ensure execution integrity.
  4. Configuration Helpers: Objects such as Properties and Parameters expose settings to the Production Configuration UI, allowing users to easily instantiate, configure, and save the state of these components.

Workflow using pyprod

This is essentially a three-step process.

  1. Write your production components in a regular Python script. In that script, you import the required base classes from intersystems_pyprod and define your own components by subclassing them, just as you would with any other Python library.
  2. Load them into InterSystems IRIS by running the intersystems_pyprod (same name as the library) command from the terminal and passing it the path to your Python script. This step links the Python classes with IRIS so that they appear as production components and can be configured and wired together using the standard Production Configuration UI. 
  3. Create the production using the Production Configuration page and start it.

NOTE: If you create all your components with all their Properties hardcoded within the Python script, you only need to add them to the production and start it.

You can connect pyprod to your IRIS instance with a one-time setup.


Simple Example

In this example, we demonstrate a synchronous message flow where a request originates from a Service, moves through a Process, and is forwarded to an Operation. The resulting response then travels the same path in reverse, passing from the Operation back through the Process to the Service. Additionally, we showcase how to utilize the IRISLog utility to write custom log entries.

Step 1

Create your Production components using pyprod in the file HelloWorld.py.

Here are some key parts of the code:

  • Package Naming: We define iris_package_name, which prefixes all classes as they appear on the Production Configuration page (If omitted, the script name is used as the default prefix).
  • Persistable Messages: We define MyRequest and MyResponse. These are the essential data structures for communication, as only persistable objects can be passed between Services, Processes, and Operations.
  • The Inbound Adapter: Our adapter passes a string to the Service using the business_host_process_input method.
  • The Business Service: Implemented with the help of the OnProcessInput callback.
    • MyService receives data from the adapter and converts it into a MyRequest message
    • We use the ADAPTER IRISParameter to link the Inbound Adapter to the Service. Note that this attribute must be named ADAPTER in all caps to align with IRIS conventions.
    • We define a target IRISProperty, which allows users to select the destination component directly via the Configuration UI.
  • The Business Process: Implemented with the help of the OnRequest callback.
  • The Business Operation: Implemented with the help of the OnMessage callback. (You can also define a MessageMap.)
  • Logic & Callbacks: Finally, the hosts implement their core logic within standard callbacks like OnProcessInput and OnRequest, routing messages using the SendRequestSync method.

You can read more about each of these parts on the pyprod API Reference page and also using the Quick Start Guide.

import time

from intersystems_pyprod import (
    InboundAdapter, BusinessService, BusinessProcess,
    BusinessOperation, OutboundAdapter, JsonSerialize,
    IRISProperty, IRISParameter, IRISLog, Status)

# Prefix for all classes as they appear on the Production Configuration page
iris_package_name = "helloworld"

# Persistable messages: the only objects that can travel between hosts
class MyRequest(JsonSerialize):
    content: str

class MyResponse(JsonSerialize):
    content: str

class MyInAdapter(InboundAdapter):
    def OnTask(self):
        time.sleep(0.5)
        self.business_host_process_input("request message")
        return Status.OK()

class MyService(BusinessService):
    # Must be named ADAPTER (all caps) to follow IRIS conventions
    ADAPTER = IRISParameter("helloworld.MyInAdapter")
    # Exposed in the Configuration UI so users can pick the destination host
    target = IRISProperty(settings="Target")

    def OnProcessInput(self, input):
        persistent_message = MyRequest(input)
        status, response = self.SendRequestSync(self.target, persistent_message)
        IRISLog.Info(response.content)
        return status

class MyProcess(BusinessProcess):
    target = IRISProperty(settings="Target")

    def OnRequest(self, input):
        status, response = self.SendRequestSync(self.target, input)
        return status, response

class MyOperation(BusinessOperation):
    ADAPTER = IRISParameter("helloworld.MyOutAdapter")

    def OnMessage(self, input):
        status = self.ADAPTER.custom_method(input)
        response = MyResponse("response message")
        return status, response

class MyOutAdapter(OutboundAdapter):
    def custom_method(self, input):
        IRISLog.Info(input.content)
        return Status.OK()

Step 2

Once your code is ready, load the components into IRIS.

$ intersystems_pyprod /full/path/to/HelloWorld.py

    Loading MyRequest to IRIS...
    ...
    Load finished successfully.
    
    Loading MyResponse to IRIS...
    ...
    Load finished successfully.
    ...
    

Step 3

Add each host to the Production using the Production Configuration page.

The image below shows MyService and its target property being configured through the UI. Follow the same process to add MyProcess and MyOperation. Once the setup is complete, simply start the production to see your messages in motion.


Final Thoughts

By combining the flexibility of the Python ecosystem with the industrial-grade reliability of InterSystems IRIS, pyprod offers a modern path for building interoperability solutions. Whether you are developing entirely new "Pure Python" productions or enhancing existing ObjectScript infrastructures with specialized Python libraries, pyprod ensures your components remain fully integrated, observable, and easy to configure. We look forward to seeing what you build!


Quick Links

GitHub repository  

PyPI Package

Support the Project: If you find this library useful, please consider giving us a ⭐ on GitHub and suggesting enhancements. It helps the project grow and makes it easier for other developers in the InterSystems community to discover it!
Discussion (0)
Article
· February 18 · 7 min read

How to easily add OpenAPI specification validation to your REST APIs

In this article, I want to demonstrate a couple of methods for easily adding validation to REST APIs on the InterSystems IRIS Data Platform. I believe a specification-first approach is an excellent idea for API development. IRIS already has functionality to generate an implementation skeleton from a specification and to publish that specification for external developers (use it together with iris-web-swagger-ui for best results). The one important piece still missing from the platform is a request validator. Let's fix that!

The task is as follows: every incoming request must be validated against the API schema described in OpenAPI format. As you know, a request consists of a method (GET, POST, etc.), a URL with parameters, headers (Content-Type, for example), and a body (some JSON). All of these can be checked. To solve this task I will use Embedded Python, since the vast open-source Python ecosystem already offers two suitable projects: openapi-core and openapi-schema-validator. One limitation here is that IRIS uses Swagger 2.0, an outdated version of OpenAPI. Most tools do not support this version, so the first implementation of our validator will be limited to checking only the request body.

Solution based on openapi-schema-validator

Key points:

  • The solution is fully compatible with the specification-first approach recommended by InterSystems for API development. You do not need to modify the generated API classes, except for one small detail, which I will cover later.
  • Only the request body is validated.
  • We need to extract the request type definition from the OpenAPI specification (the spec.cls class).
  • The request JSON is matched to its specification definition by setting a vendor-specific content type.

First, you need to set a vendor-specific content type in the consumes property of the OpenAPI specification for your endpoint. It should look something like this: vnd.<company>.<project>.<api>.<request_type>+json. For example, I will use:

"paths":{
      "post":{
        "consumes":[
          "application/vnd.validator.sample_api.test_post_req+json"
        ],
...

Next, we need a base class for our dispatch class. Here is the complete code of this class; it is also available on Git.

Class SwaggerValidator.Core.REST Extends %CSP.REST
{

Parameter UseSession As Integer = 1;
ClassMethod OnPreDispatch(pUrl As %String, pMethod As %String, ByRef pContinue As %Boolean) As %Status
{
	Set tSC = ..ValidateRequest()
    
    If $$$ISERR(tSC) {
        Do ..ReportHttpStatusCode(##class(%CSP.REST).#HTTP400BADREQUEST, tSC)
        Set pContinue = 0
    }

    Return $$$OK
}

ClassMethod ValidateRequest() As %Status
{
    Set tSC = ##class(%REST.API).GetApplication($REPLACE($CLASSNAME(),".disp",""), .spec)
    Return:$$$ISERR(tSC) tSC

    Set defName = $PIECE($PIECE(%request.ContentType, "+", 1), ".", *)
    Return:defName="" $$$ERROR($$$GeneralError, $$$FormatText("No definition name found in Content-Type = %1", %request.ContentType))
    
    Set type = spec.definitions.%Get(defName)
    Return:type="" $$$ERROR($$$GeneralError, $$$FormatText("No definition found in specification by name = %1", defName))
    
    Set schema = type.%ToJSON() 
    Set body = %request.Content.Read()

    Try {Set tSC = ..ValidateImpl(schema, body)} Catch ex {Set tSC = ex.AsStatus()}

    Return tSC
}

ClassMethod ValidateImpl(schema As %String, body As %String) As %Status [ Language = python ]
{
    try:
        validate(json.loads(body), json.loads(schema))
    except Exception as e:
        return iris.system.Status.Error(5001, f"Request body is invalid: {e}")

    return iris.system.Status.OK()
}

XData %import [ MimeType = application/python ]
{
import iris, json
from openapi_schema_validator import validate
}

}

Here we are doing the following:

  1. OnPreDispatch() is overridden to add the validation. This code runs on every call to our API.
  2. ##class(%REST.API).GetApplication() is used to obtain the specification as a dynamic object (JSON).
  3. The definition name is extracted from the Content-Type header.
  4. The request schema is retrieved by the definition name: spec.definitions.%Get(defName)
  5. The request schema and the request body are passed to the Python code for validation.

As you can see, it is all quite simple. Now you only need to change the Extends clause of your disp.cls to SwaggerValidator.Core.REST. And, of course, install the openapi-schema-validator Python library on the server (as described here).
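
To see what the Python side is doing in isolation, here is a minimal standalone sketch of openapi-schema-validator at work; the schema and payload below are invented for illustration and do not come from the article:

import json
from openapi_schema_validator import validate
from jsonschema.exceptions import ValidationError

# Hypothetical schema, standing in for a definition extracted from spec.cls
schema = {
    "type": "object",
    "properties": {"name": {"type": "string"}, "age": {"type": "integer"}},
    "required": ["name"],
}

try:
    validate(json.loads('{"name": "John", "age": "not-a-number"}'), schema)
except ValidationError as e:
    print(f"Request body is invalid: {e.message}")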

Solution based on openapi-core

Key points:

  • This solution works with a hand-coded REST interface. We do not use the API Management tools to generate code from an OpenAPI specification; we simply have a REST service as a subclass of %CSP.REST.
  • We are therefore not limited to version 2.0/JSON, and we will use OpenAPI 3.0 in YAML format. This version offers more possibilities, and I find YAML more readable.
  • The following elements are checked: path and query parameters in the URL, the Content-Type, and the request body.

To begin, let's take our specification, located at <server>/api/mgmnt/v1/<namespace>/spec/<web-application>. Yes, we get a generated OpenAPI specification even for hand-coded REST APIs. It is not a complete specification, because it does not contain the request and response schemas (the generator has no way of knowing where to get them), but the platform has already done half the work for us. Let's place this specification in an XData block named OpenAPI in the Spec.cls class. Next, we need to convert the specification to OpenAPI 3.0/YAML format and add definitions for requests and responses. You can use a converter or simply ask Codex:

Please convert the specification in the class @Spec.cls to OpenAPI 3.0 and YAML format.

In the same way, we can ask Codex to generate the request/response schemas based on JSON examples.

By the way, vibe coding works quite well for IRIS development, but that is a topic for another time. Let me know if you would find it interesting!

As in the previous solution, we must create a base class for our %CSP.REST. This class is very similar:

Class SwaggerValidator.Core.RESTv2 Extends %CSP.REST
{

Parameter UseSession As Integer = 1;
ClassMethod OnPreDispatch(pUrl As %String, pMethod As %String, ByRef pContinue As %Boolean) As %Status
{
	Set tSC = ..ValidateRequest()
    
    If $$$ISERR(tSC) {
        Do ..ReportHttpStatusCode(##class(%CSP.REST).#HTTP400BADREQUEST, tSC)
        Set pContinue = 0
    }

    Return $$$OK
}

ClassMethod ValidateRequest() As %Status
{
    Set tSC = ..GetSpec(.swagger) 
    Return:$$$ISERR(tSC)||(swagger="") tSC

    Set canonicalURI = %request.CgiEnvs("REQUEST_SCHEME")_"://"_%request.CgiEnvs("HTTP_HOST")_%request.CgiEnvs("REQUEST_URI")
    Set httpBody = $SELECT($ISOBJECT(%request.Content)&&(%request.Content.Size>0):%request.Content.Read(), 1:"")
    Set httpMethod = %request.CgiEnvs("REQUEST_METHOD")
    Set httpContentType = %request.ContentType
    Try {
        Set tSC = ..ValidateImpl(swagger, canonicalURI, httpMethod, httpBody, httpContentType)
    } Catch ex {
        Set tSC = ex.AsStatus()
    }

    Return tSC
}

/// The class Spec.cls must be located in the same package as the %CSP.REST implementation
/// The class Spec.cls must contain an XData block named 'OpenAPI' with swagger 3.0 specification (in YAML format) 
ClassMethod GetSpec(Output specification As %String, xdataName As %String = "OpenAPI") As %Status
{
    Set specification = ""
    Set specClassName = $CLASSNAME()
    Set $PIECE(specClassName, ".", *) = "Spec"
    Return:'##class(%Dictionary.ClassDefinition).%Exists($LISTBUILD(specClassName)) $$$OK
    Set xdata = ##class(%Dictionary.XDataDefinition).%OpenId(specClassName_"||"_xdataName,,.tSC)
    If $$$ISOK(tSC),'$ISOBJECT(xdata)||'$ISOBJECT(xdata.Data)||(xdata.Data.Size=0) {
		Set tSC = $$$ERROR($$$RESTNoRESTSpec, xdataName, specClassName)
	}
    Return:$$$ISERR(tSC) tSC
    
    Set specification = xdata.Data.Read()
    Return tSC
}

ClassMethod ValidateImpl(swagger As %String, url As %String, method As %String, body As %String, contentType As %String) As %Status [ Language = python ]
{
    spec = Spec.from_dict(yaml.safe_load(swagger))
    data = json.loads(body) if (body != "") else None
    headers = {"Content-Type": contentType}
    
    req = requests.Request(method=method, url=url, json=data, headers=headers).prepare()
    openapi_req = RequestsOpenAPIRequest(req)

    try:
        validate_request(openapi_req, spec=spec)
    except Exception as ex:
        return iris.system.Status.Error(5001, f"Request validation failed: {ex.__cause__ if ex.__cause__ else ex}")

    return iris.system.Status.OK()
}

XData %import [ MimeType = application/python ]
{
import iris, json, requests, yaml
from openapi_core import Spec, validate_request
from openapi_core.contrib.requests import RequestsOpenAPIRequest
}

}

Keep in mind: the class containing the specification must be named Spec.cls and must live in the same package as your %CSP.REST implementation. The specification class looks like this:

Class Sample.API.Spec Extends %RegisteredObject
{

XData OpenAPI [ MimeType = application/yaml ]
{
    ... your YAML specification ...
}
}

To enable validation, you only need to make your API class inherit from SwaggerValidator.Core.RESTv2 and place the Spec.cls file next to it.
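
If you want to experiment with the openapi-core side outside IRIS first, here is a minimal standalone sketch; the spec, URL, and payload are invented for illustration:

import yaml
import requests
from openapi_core import Spec, validate_request
from openapi_core.contrib.requests import RequestsOpenAPIRequest

# Hypothetical OpenAPI 3.0 spec, standing in for the contents of Spec.cls
spec = Spec.from_dict(yaml.safe_load("""
openapi: "3.0.0"
info: {title: Demo, version: "1.0"}
servers:
  - url: http://localhost:52773/demo
paths:
  /items:
    post:
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
              required: [name]
              properties: {name: {type: string}}
      responses:
        "200": {description: OK}
"""))

# Build a request the same way ValidateImpl does, then validate it
req = requests.Request(
    method="POST", url="http://localhost:52773/demo/items",
    json={"wrong_field": 1}, headers={"Content-Type": "application/json"}).prepare()

try:
    validate_request(RequestsOpenAPIRequest(req), spec=spec)
except Exception as ex:
    print(f"Request validation failed: {ex.__cause__ if ex.__cause__ else ex}")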

That is all I wanted to tell you about Swagger validation. Feel free to ask me questions.

Discussion (0)
Article
· February 18 · 9 min read

Best Practices for Integrating AI with InterSystems for Real-Time Analytics

Your data pipelines are running. Your dashboards are live. But if AI isn't embedded directly into your InterSystems environment, you're leaving the most valuable insights sitting idle—processed too slowly to act on.

Real-time analytics isn't just about speed anymore. It's about intelligence at the point of action. InterSystems IRIS, the company's flagship data platform, is purpose-built to close the gap between raw operational data and AI-driven decisions—without the latency tax of moving data to an external ML system.


In this guide, we break down the proven best practices for integrating AI with InterSystems for real-time analytics, covering everything from architecture patterns to model deployment strategies that actually hold up in production.

Quick Overview

  • Platform: InterSystems IRIS Data Platform
  • Core Capability: Embedded AI/ML with real-time transactional + analytics workloads
  • Key Use Cases: Healthcare analytics, financial fraud detection, supply chain optimization
  • AI Integration Methods: IntegratedML, Python Gateway, PMML, REST API endpoints
  • Target Users: Data engineers, AI architects, enterprise developers

Why InterSystems IRIS for AI-Driven Real-Time Analytics

Most enterprise analytics architectures suffer from a common flaw: the AI layer lives outside the data layer. Data has to travel (from operational databases to data lakes, through transformation pipelines, into ML platforms) before a prediction can be made. By then, the moment has often passed.

InterSystems IRIS takes a fundamentally different approach. It combines transactional processing (OLTP), analytics (OLAP), and AI/ML capabilities in a single, unified platform. This convergence isn't just a convenience—it's a performance breakthrough. According to InterSystems, IRIS can ingest and analyze millions of events per second while simultaneously running machine learning models against that live data.

The result: AI predictions generated in milliseconds, not minutes. For industries where the cost of latency is measured in lives (healthcare) or dollars (financial services), this architecture is a game-changer.

Best Practice #1: Use IntegratedML for In-Database Model Training

IntegratedML is InterSystems' declarative machine learning engine built directly into IRIS SQL. Rather than extracting data to an external Python or R environment, train and deploy models with SQL-style commands inside the database itself.

This approach eliminates the data movement overhead that plagues traditional ML pipelines. A model trained on 10 million patient records doesn't need to be serialized, transferred, and deserialized—it runs where the data lives.

How to Implement

  • Create a model: CREATE MODEL PatientRisk PREDICTING (RiskScore) FROM PatientData
  • Train with a single command: TRAIN MODEL PatientRisk
  • Generate predictions inline: SELECT PREDICT(PatientRisk) AS Risk FROM PatientData WHERE PatientID = 12345
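
As a rough illustration of how these statements might be issued from Python, here is a sketch using the InterSystems Python DB-API driver; the connection parameters, table, and column names are placeholders, not part of the article:

import iris  # InterSystems Python DB-API driver (pip install intersystems-irispython)

# Placeholder connection details for illustration
conn = iris.connect("localhost", 1972, "USER", "_SYSTEM", "SYS")
cur = conn.cursor()

# Create and train once; PREDICT() is then available inline in ordinary SQL
cur.execute("CREATE MODEL PatientRisk PREDICTING (RiskScore) FROM PatientData")
cur.execute("TRAIN MODEL PatientRisk")

cur.execute(
    "SELECT PREDICT(PatientRisk) AS Risk FROM PatientData WHERE PatientID = ?",
    [12345])
print(cur.fetchone()[0])
conn.close()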

Best practice: Use IntegratedML for structured tabular data where speed-to-deployment matters more than custom model architectures. For deep learning or custom neural networks, leverage Python Gateway instead.

Best Practice #2: Leverage Python Gateway for Advanced ML Frameworks

IntegratedML handles a wide range of classification and regression problems, but enterprise AI often demands more—custom neural networks, NLP pipelines, reinforcement learning, or computer vision models built in TensorFlow, PyTorch, or scikit-learn.

InterSystems Python Gateway solves this by embedding Python execution natively within IRIS. Instead of building a separate microservice to run your Python models, you call them directly from ObjectScript or via SQL stored procedures. The data never leaves the IRIS environment.

Key Implementation Tips

  • Install Python Gateway as an IRIS add-on and configure your Python environment path in the IRIS Management Portal
  • Use the IRISNative API to pass IRIS globals directly into Python objects—eliminating serialization overhead
  • Cache frequently used model objects in memory using IRIS's built-in caching layer to avoid re-loading on every prediction request
  • For high-throughput scenarios, deploy models as persistent Python processes rather than loading them per-request
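
The caching tip is the one most often skipped. A minimal sketch of the pattern, assuming a scikit-learn model serialized with joblib at a hypothetical path, keeps one model instance alive per Python process instead of re-loading it on every prediction:

from functools import lru_cache

@lru_cache(maxsize=1)
def get_model():
    # Loaded once per process; later calls return the cached instance
    import joblib
    return joblib.load("/models/patient_risk.pkl")  # hypothetical path

def predict(features: list) -> float:
    # Called per request, e.g. from an IRIS stored procedure
    return float(get_model().predict([features])[0])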

For organizations looking to accelerate production-grade AI deployment across complex enterprise environments, teams like Denebrix AI provide structured implementation support that spans model development, IRIS integration, and real-time pipeline validation.

Best Practice #3: Architect for Low-Latency with Adaptive Analytics

Real-time analytics requires more than fast data retrieval—it demands an architecture that adapts to changing data distributions. InterSystems Adaptive Analytics (powered by AtScale) bridges IRIS with BI tools like Tableau, Power BI, and Looker, providing a semantic layer that enables live, in-memory analytical queries without pre-aggregating data into cubes.

The key architectural principle here is pushdown optimization: analytics queries run as close to the data as possible, inside IRIS, rather than pulling raw rows into an external analytics engine. This can reduce query times from minutes to seconds for enterprise-scale datasets.

Architecture Recommendations

  • Define business metrics and KPIs in the Adaptive Analytics semantic layer—not in your BI tool—to ensure consistency across dashboards
  • Use IRIS columnar storage for analytics-heavy tables while keeping transactional tables in row-based storage
  • Implement multi-model data architecture: relational tables for transactions, globals for hierarchical data, and vector tables for similarity search in AI applications
  • Enable streaming analytics via IRIS's built-in message broker to process event streams without leaving the platform

Best Practice #4: Deploy Models with PMML for Portability

Not every AI model will be born inside IRIS. Data scientists often build models in external environments—SageMaker, Azure ML, Google Vertex AI—and need to deploy them into operational systems. InterSystems supports PMML (Predictive Model Markup Language), an open standard for representing trained models.

Importing a PMML model into IRIS means predictions can be generated by the platform without maintaining a live connection to the external ML environment. This is particularly valuable in regulated industries where data residency requirements prevent sending records to cloud inference endpoints.

PMML Deployment Workflow

  • Export trained model as PMML XML from your ML platform of choice
  • Import into IRIS using the DeepSee PMML engine or the InterSystems PMML Utils library
  • Wrap the PMML inference call in an IRIS stored procedure for easy integration with existing application code
  • Monitor prediction drift by logging PMML outputs alongside actuals in an IRIS analytics table
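
Before importing, it can help to sanity-check an exported PMML file outside IRIS. Here is a quick sketch using the third-party pypmml library; the file name and input fields are made up:

from pypmml import Model  # pip install pypmml

model = Model.fromFile("patient_risk.pmml")  # hypothetical exported model
print(model.inputNames)                      # confirm expected feature names
print(model.predict({"age": 63, "bmi": 31.2}))  # score a single record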

Best Practice #5: Build GenAI Applications with Vector Search

Generative AI is redefining what's possible with enterprise data. InterSystems IRIS now supports vector embeddings natively, enabling semantic search, retrieval-augmented generation (RAG), and similarity-based recommendations directly within the platform—no external vector database required.

This is significant for real-time analytics: imagine a clinical decision support system that retrieves semantically similar patient cases at the moment a physician places an order, or a fraud detection engine that finds transactions matching known fraud patterns using embedding similarity rather than rigid rule matching.

Implementation Blueprint

  • Generate embeddings using models like OpenAI text-embedding-3-small or open-source alternatives (BAAI/bge, sentence-transformers)
  • Store embeddings in IRIS vector-type columns alongside your structured data
  • Use VECTOR_COSINE() or VECTOR_DOT_PRODUCT() SQL functions to run similarity queries inline with your analytics
  • For RAG applications, combine IRIS vector search with an LLM API call, passing retrieved context as part of the prompt
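
Tying these steps together, here is a hedged sketch of a similarity query issued over IRIS SQL from Python; the ClinicalNotes table, embedding column, model choice, and connection details are all illustrative:

import iris  # InterSystems Python DB-API driver
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # open-source embedding model

conn = iris.connect("localhost", 1972, "USER", "_SYSTEM", "SYS")  # placeholders
cur = conn.cursor()

# IRIS accepts the query vector as a comma-separated string via TO_VECTOR
query_vec = ",".join(str(x) for x in encoder.encode("chest pain with shortness of breath"))

cur.execute("""
    SELECT TOP 5 NoteID, NoteText
    FROM ClinicalNotes
    ORDER BY VECTOR_COSINE(embedding, TO_VECTOR(?, DOUBLE)) DESC
""", [query_vec])
for note_id, text in cur.fetchall():
    print(note_id, text[:80])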

AI Integration Methods: Comparison at a Glance

Method                 Best For                              Latency       Data Movement
IntegratedML           Tabular classification/regression    Very Low      None
Python Gateway         Custom ML frameworks                 Low           None
PMML Import            Pre-trained external models          Low           None
REST API (external)    Large LLMs, cloud models             Medium-High   Yes
Vector Search          Semantic/similarity queries          Very Low      None

Best Practice #6: Monitor, Retrain, and Govern AI Models Continuously

Production AI models degrade. Data distributions shift, business rules change, and models that were 95% accurate at deployment can slip to 70% within months. Real-time analytics environments are especially vulnerable because they're ingesting live, unpredictable data.

InterSystems IRIS provides the infrastructure for continuous model monitoring through its analytics and auditing capabilities. Build feedback loops that log predictions, compare them against actuals, and trigger retraining workflows when accuracy falls below defined thresholds.

Governance Checklist

  • Log every model prediction with input features, output score, and timestamp into an IRIS audit table
  • Set up automated drift detection using statistical tests (KS-test, PSI) on incoming feature distributions
  • Define retraining triggers: schedule-based (weekly), performance-based (accuracy < threshold), or event-based (data schema change)
  • Maintain model versioning in IRIS globals for rollback capability
  • Implement role-based access controls (RBAC) on model endpoints to ensure only authorized services can invoke AI predictions
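
For the drift-detection item, here is a compact sketch of the statistical check using scipy's two-sample KS test; the 0.05 threshold and synthetic data are illustrative choices:

import numpy as np
from scipy.stats import ks_2samp

def drift_detected(train_values, live_values, alpha=0.05):
    """Two-sample Kolmogorov-Smirnov test between training and live distributions."""
    statistic, p_value = ks_2samp(train_values, live_values)
    return p_value < alpha  # low p-value: distributions differ, consider retraining

# Synthetic example: the live feature distribution has shifted upward
rng = np.random.default_rng(0)
print(drift_detected(rng.normal(0, 1, 5000), rng.normal(0.4, 1, 5000)))  # True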

Frequently Asked Questions

What is InterSystems IRIS IntegratedML?

IntegratedML is a declarative machine learning engine embedded directly in InterSystems IRIS. It lets developers train, validate, and deploy predictive models using SQL-like syntax, without moving data to an external ML platform. It's designed to reduce the complexity of bringing AI into production for developers who aren't data scientists.

How does InterSystems IRIS handle real-time AI inference?

IRIS runs AI inference in-process with the operational data, eliminating network round-trips to external ML services. Through IntegratedML, Python Gateway, and native PMML support, predictions are generated as part of SQL queries or application transactions—delivering millisecond latency at enterprise scale.

Can I use Python and TensorFlow with InterSystems IRIS?

Yes. The Python Gateway add-on enables direct Python execution within the IRIS environment. You can use any Python ML library—TensorFlow, PyTorch, scikit-learn, HuggingFace—and call models from ObjectScript or SQL. This allows teams to build models in familiar Python environments and deploy them without a separate inference microservice.

What are the limitations of IntegratedML?

IntegratedML is optimized for structured tabular data and standard ML tasks (classification, regression). It doesn't support custom neural network architectures, unstructured data like images or audio, or advanced techniques such as reinforcement learning. For these use cases, Python Gateway or external model integration via REST or PMML is recommended.

How does InterSystems IRIS support Generative AI applications?

IRIS supports GenAI through native vector storage and similarity search functions, enabling retrieval-augmented generation (RAG) workflows without a separate vector database. Teams can store embeddings alongside structured data, run semantic search queries in SQL, and combine results with external LLM API calls for applications like intelligent document retrieval or clinical decision support.

Is InterSystems IRIS suitable for healthcare AI analytics?

Yes, IRIS is widely adopted in healthcare, with purpose-built products like HealthShare and TrakCare built on the platform. It supports HL7 FHIR natively, provides HIPAA-compliant data handling, and integrates AI capabilities directly with clinical data—making it well-suited for predictive analytics in clinical and operational healthcare settings.

Final Thoughts

Integrating AI with InterSystems for real-time analytics isn't a single decision—it's a series of architectural choices that compound over time. Start with IntegratedML for fast time-to-value on structured prediction tasks. Layer in Python Gateway when your models outgrow declarative SQL. Embrace vector search as GenAI reshapes what enterprise applications can do.

The organizations winning with real-time AI aren't just faster—they're building systems where intelligence is inseparable from operations. InterSystems IRIS gives you the platform to do exactly that. The practices in this guide give you the roadmap.

Discussion (0)
Question
· February 17

Ensemble does not return an ACK after ENQ and closes with EOT.

Hello Team,

I am currently working with the CD Ruby machine, which is connected through DIGI. When I click on the “Test Link” option on the instrument, I can see the following behavior in Wireshark logs:

Ensemble sends an ACK (06) after receiving ENQ (05), followed by EOT (04), roughly as shown in the photo above. However, when another ENQ is received, Ensemble does not send an ACK in response. As a result, the instrument displays a failure message.

Also attaching the Ensemble settings:

I am using a TCP service with an inbound adapter configured for the ASTM protocol. Is there a way to configure the system to send an ACK in response to every ENQ? Please let me know if any further details are needed.
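
For reference, the behavior being asked for, at the raw protocol level, is simply to ACK every ENQ, even after an EOT has ended the previous transfer. Here is a toy Python sketch of that handshake (not an Ensemble configuration; port and framing are simplified for illustration):

import socket

ENQ, ACK, EOT = b"\x05", b"\x06", b"\x04"

# Toy ASTM low-level responder: ACK every ENQ; EOT just ends one transfer phase
srv = socket.create_server(("0.0.0.0", 4000))
conn, _ = srv.accept()
while True:
    byte = conn.recv(1)
    if not byte:
        break
    if byte == ENQ:
        conn.sendall(ACK)   # acknowledge every ENQ, including after an EOT
    elif byte == EOT:
        pass                # transfer finished; keep listening for the next ENQ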

Discussion (3)