
Article
· January 16 · about 2 min read

Custom File Adapter – Lookup Table / Dynamic Files

My problem was splitting HL7 messages by message type. I would have needed to create multiple File Operations. With custom code, I was able to use one file adapter for one interface and multiple message types. I even experimented with extracting MSH-4 from the raw content to get at additional dynamic information, but that may call for more robust error checking and/or default lookup actions.
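As a hedged sketch of that MSH-4 experiment (the field paths are illustrative only), the value can be read either from the parsed document or from the raw text:

// Illustrative only: two ways to read MSH-4 (Sending Facility) inside the operation.
// Via the parsed HL7 document:
Set msh4 = pDocument.GetValueAt("MSH:4")
// Or from the raw content: take the MSH segment and its 4th "|" piece
// (the field separator itself counts as MSH-1, so piece 4 lines up with MSH-4).
Set mshSeg = $PIECE(pDocument.RawContent,$CHAR(13),1)
Set msh4Raw = $PIECE(mshSeg,"|",4)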

Following the recommended naming convention of "To_FILE_<IntegrationName>",

I decided to use a generic file name and path in the default settings.

I created a custom class that extends EnsLib.File.OutboundAdapter, with custom code that lets me dynamically control the file adapter path for each message type through a lookup table. If I do not have a value defined, the generic default path is used. Otherwise, my code overrides the path and file name. The lookup table can have any name, as long as it matches what is defined in the code.


 

Custom code

//SRC1 Extract the 3rd piece of the outbound operation name "<IntegrationName>"

//SRC2 Extract the 1st piece of the DOCTYPE NAME: "ORM" / "ADT" / "ORU" / etc.

// Define a new SRC variable to concatenate SRC1_SRC2 together

//New lookup table that controls the path names in a single place.

 

Set src1=$PIECE(..%ConfigName,"_",3,3)
Set src2=$PIECE(pDocument.DocTypeName,"_",1,1)
If $GET(src)="" {
	Set src=src1_"_"_src2
}
//Build the file name from the generic name plus the prefix stored in the lookup value ("path^prefix")
Set pFilename = ..Adapter.CreateFilename(##class(%File).GetFilename(src),$PIECE(##class(Ens.Rule.FunctionSet).Lookup("HL7FileNamePath",src),"^",2,2)_..Filename)
$$$TRACE(pFilename)
//Reset file path to return a file path based on the Lookup and PIECE function(s)
Set ..Adapter.FilePath = $PIECE(##class(Ens.Rule.FunctionSet).Lookup("HL7FileNamePath",src),"^",1,1)
$$$TRACE(..Adapter.FilePath)
Set tSC = ..Adapter.Open(pFilename)  Quit:$$$ISERR(tSC) tSC
Set $ZT="Trap"
Use ..Adapter.Device  Set tSC=..OutputFramedToDevice(pDocument,pSeparators,"",0,..IOLogEntry,.pDoFraming)  Use ..Adapter.OldIO
Set $ZT=""
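The lookup values in this code are assumed to follow a "path^filename-prefix" layout, matching the two $PIECE calls above. Purely as an illustration (table name, keys and paths below are made up), the table could be populated in the Management Portal under Interoperability > Configure > Data Lookup Tables, or with code such as:

// Illustrative only: populate the "HL7FileNamePath" lookup table used above.
// Keys follow the <IntegrationName>_<DocType> pattern built into src,
// values use the assumed "path^filename-prefix" layout read by the $PIECE calls.
Set ^Ens.LookupTable("HL7FileNamePath","LabOrders_ORM")="C:\HL7Out\Orders\^ORM_"
Set ^Ens.LookupTable("HL7FileNamePath","LabOrders_ADT")="C:\HL7Out\ADT\^ADT_"
Set ^Ens.LookupTable("HL7FileNamePath","LabOrders_ORU")="C:\HL7Out\Results\^ORU_"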

Article
· January 16 · about 3 min read

Minifying XML in IRIS

In a project I am working on, we need to store some arbitrary XML in the database. This XML has no corresponding class in IRIS; we only need to store it as a string (it is relatively small and fits in a string).
Since there are MANY (millions!) of records in the database, I decided to reduce the size as much as possible without using compression. I know that some of the XML to be stored is indented and some is not; it varies.

To reduce the size, I decided to minify the XML, but how do you minify an XML document in IRIS?
I searched through the classes and utilities and could not find any ready-made code or method, so I had to implement it myself, and it turned out to be quite simple in IRIS using the %XML.TextReader class; honestly, simpler than I expected.

Since this may be useful in other contexts, I decided to share this small utility with the Developer Community.
I tested it with some fairly complex XML documents and it worked well; here is the code.

/// Minify an XML document passed in the XmlIn Stream, the minified XML is returned in XmlOut Stream
/// If XmlOut Stream is passed, then the minified XML is stored in the passed Stream, otherwise a %Stream.TmpCharacter is returned in XmlOut.
/// Collapse = 1 (default), empty elements are collapsed, e.g. <tag></tag> is returned as <tag/>
/// ExcludeComments = 1 (default), comments are not returned in the minified XML
ClassMethod MinifyXML(XmlIn As %Stream, ByRef XmlOut As %Stream = "", Collapse As %Boolean = 1, ExcludeComments As %Boolean = 1) As %Status
{
	#Include %occSAX
	Set sc=$$$OK
	Try {
		Set Mask=$$$SAXSTARTELEMENT+$$$SAXENDELEMENT+$$$SAXCHARACTERS+$$$SAXCOMMENT
		Set sc=##class(%XML.TextReader).ParseStream(XmlIn,.reader,,$$$SAXNOVALIDATION,Mask)
		#dim reader as %XML.TextReader
		If $$$ISERR(sc) Quit
		If '$IsObject(XmlOut) {
			Set XmlOut=##class(%Stream.TmpCharacter).%New()
		}
		While reader.Read() {
			Set type=reader.NodeType
			If ((type="error")||(type="fatalerror")) {
				Set sc=$$$ERROR($$$GeneralError,"Error loading XML "_type_"-"_reader.Value)
				Quit
			}
			If type="element" {
				Do XmlOut.Write("<"_reader.Name)
				If Collapse && reader.IsEmptyElement {
					; collapse empty element
					Do XmlOut.Write("/>")
					Set ElementEnded=1
				} Else {
					; add attributes
					For k=1:1:reader.AttributeCount {
						Do reader.MoveToAttributeIndex(k)
						Do XmlOut.Write(" "_reader.Name_"="""_reader.Value_"""")
					}
					Do XmlOut.Write(">")
				}
			} ElseIf type="chars" {
				Set val=reader.Value
				Do XmlOut.Write($select((val["<")||(val[">")||(val["&"):"<![CDATA["_$replace(val,"]]>","]]]]><![CDATA[>")_"]]>",1:val))
			} ElseIf type="endelement" {
				If $g(ElementEnded) {
					; ended by collapsing
					Set ElementEnded=0
				} Else {
					Do XmlOut.Write("</"_reader.Name_">")
				}
			} ElseIf 'ExcludeComments && (type="comment") {
				Do XmlOut.Write("<!--"_reader.Value_"-->")
			}
		}
	} Catch CatchError {
		#dim CatchError as %Exception.SystemException
		Set sc=CatchError.AsStatus()
	}
	Quit sc
}
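A minimal usage sketch follows; Util.XML is a hypothetical class name standing in for wherever the method above is saved.

// Illustrative usage only; Util.XML is an assumed container class for MinifyXML.
Set xmlIn=##class(%Stream.TmpCharacter).%New()
Do xmlIn.Write("<root>"_$CHAR(13,10)_"  <item id=""1"">value</item>"_$CHAR(13,10)_"  <empty></empty>"_$CHAR(13,10)_"</root>")
Do xmlIn.Rewind()
Set sc=##class(Util.XML).MinifyXML(xmlIn,.xmlOut)
If $SYSTEM.Status.IsError(sc) {
	Do $SYSTEM.Status.DisplayError(sc)
} Else {
	Do xmlOut.Rewind()
	// Expected: <root><item id="1">value</item><empty/></root>
	Write xmlOut.Read(),!
}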

P.S.: Does anyone know of another, simpler way to minify XML in IRIS?

Article
· January 16 · about 6 min read

Maintaining FIFO Processing with Pool Size Greater Than 1

Introduction

The recent addition of FIFO groups allows First-In, First-Out (FIFO) message processing to be maintained in an interoperability production even when a Pool Size is greater than 1, enabling higher performance without sacrificing correctness. This feature first appears in InterSystems IRIS® data platform, InterSystems IRIS® for Health, and InterSystems Health Connect™ in version 2025.3.

First-In, First-Out message processing is critical in many integration scenarios, especially in healthcare. Traditionally, FIFO ordering is enforced by configuring each business host to process only one message at a time (Pool Size = 1). While effective, this approach can limit throughput and underutilize system resources. FIFO groups preserve FIFO ordering where needed without requiring a Pool Size of 1.


FIFO Groups: The Key to Parallelism with FIFO Ordering

FIFO groups provide a way to preserve ordering within a defined group of related messages, while still allowing parallel processing across unrelated groups.

Instead of enforcing FIFO globally, FIFO groups allow you to:

  • Maintain FIFO ordering for messages that depend on each other.
  • Process independent message streams concurrently.

How FIFO Groups Work

  1. Determine a FIFO Group Identifier: Identify a value in the message that defines dependency, such as the patient MRN (Medical Record Number). Choosing an appropriate FIFO group identifier is crucial. In healthcare a common example is using a patient ID, but any business-specific identifier can be used.
  2. Assign Messages to FIFO Groups: Use a Data Transformation or custom code to assign each message a FIFO group identifier based on the value in the message. Messages with the same identifier are guaranteed to be processed in FIFO order.
  3. Enable Parallel Processing Across Groups: Messages belonging to different FIFO groups can be processed at the same time. The Pool Size determines how many FIFO groups can be processed concurrently.
  4. Control When FIFO Constraints End: Completion hosts can be defined to release FIFO constraints once ordering is no longer required downstream.

When to Use FIFO Groups

Consider FIFO groups when:

  • Pool Size = 1 does not meet performance requirements.
  • FIFO ordering is required for subsets of messages, not globally.
  • CPU utilization is below 80% capacity and parallelism would improve throughput.
  • Message dependencies can be clearly identified (such as by Patient ID, Account Number, or Order ID).

Example: FIFO Group Identifier is Patient ID and Pool Size = 2

Consider a healthcare message scenario with Pool Size = 2:

  • Patient 55, Patient 66 and Patient 77 each have a Pre-admit, Admit and Discharge message sent into the production in that order.
  • Pre-admit, Admit and Discharge messages for Patient 55, Patient 66 and Patient 77 must each be processed in order for that patient.
  • Messages for Patient 55, Patient 66 and Patient 77 are independent.

With FIFO groups:

  • All messages for each patient are processed sequentially.
  • One message for any patient can be processed in parallel with any other patient’s message.
    • For example, the Pre-admits for both Patient 55 and Patient 66 can be processed at the same time.
  • No patient’s message sequence is violated, even though two messages are being processed at the same time.

This approach provides correctness and performance.

For details about processing these messages for Patient 55, Patient 66 and Patient 77, refer to the A Closer Look at How FIFO Groups are Processed section at the end.


Configuring FIFO Group with Pool Size > 1 Example

Production description: The production accepts the data, transforms it as needed for the downstream system, and sends it to the downstream system. Messages for each patient must be processed in the order received (FIFO).

Step 1: Identify the FIFO Group Identifier

  • Determine what defines message ordering. This value becomes the FIFO group identifier, for example, Patient ID, Account Number, or Order ID.

This example: MRN for the FIFO Group Identifier.

Step 2: Choose the Starting Host

  • Identify where FIFO grouping should begin:
    • Often a business process or routing engine.
    • This is the first business host where messages must be ordered.

This example: Example.MessageRouter

 

Step 3: Create a Data Transformation for the FIFO Group Calculation

  • Create a Data Transformation with:
    • Source: the request message type
    • Target: Ens.Queue.FIFOMessageGroup.Data
      • Create=new for the target message
  • Set the following required property:
    • Identifier: A string derived from the message (for example, Patient ID).
  • Set any necessary optional properties. Refer to the Defining FIFO Groups documentation for details on using these properties:
    • Dependencies (optional): Other FIFO groups that must complete first.
    • CompletionHosts (optional): Hosts responsible for releasing FIFO constraints.
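A minimal class-based sketch of such a transformation is shown below. The class name, the HL7 source type, and the PID:3.1 field path are assumptions for illustration; a graphical DTL configured as in the example that follows works just as well.

Class Example.FIFOGroupCalc Extends Ens.DataTransform
{

/// Compute the FIFO group for an incoming HL7 message: one group per patient MRN.
ClassMethod Transform(source As EnsLib.HL7.Message, ByRef target As Ens.Queue.FIFOMessageGroup.Data, ByRef aux) As %Status
{
	Set tSC=$$$OK
	Try {
		// Create=new for the target message
		Set target=##class(Ens.Queue.FIFOMessageGroup.Data).%New()
		// Required: the FIFO group identifier; here the patient MRN (assumed to be in PID:3.1)
		Set target.Identifier=source.GetValueAt("PID:3.1")
		// Optional properties such as Dependencies and CompletionHosts could be set here;
		// see the Defining FIFO Groups documentation.
	} Catch ex {
		Set tSC=ex.AsStatus()
	}
	Quit tSC
}

}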

This example:

 

Step 4: Configure the Starting Host

  • On the starting business host:
    • Set Group Calculation to the Data Transformation name.
    • This activates FIFO grouping for incoming messages.

This example:

 

Step 5: Set Pool Size > 1

  • Set Pool Size > 1 on the starting host.
  • Pool Size determines how many FIFO groups can process concurrently.
    • For example, Pool Size = 2 allows up to two FIFO groups to be active at once. This determines how many independent messages can be processed at one time.

This example:


Conclusion

By using FIFO groups, productions can safely increase Pool Size beyond 1, allowing parallel processing where possible while preserving strict ordering where necessary. This design is especially valuable in high-volume integrations, where both correctness and scalability are essential.


 A Closer Look at How FIFO Groups are Processed

This section reviews how the processing with FIFO groups works, step by step.

In the diagram below, the current message queue for ER_Router contains the messages in the order they were received from the business service. If ER_Router processes these messages in this exact order, that would preserve FIFO for all messages. However, with FIFO groups in place, ER_Router will not follow this exact order but will still preserve FIFO processing for related messages that require it. The following example illustrates how this works, one step at a time.

Below is an example of simultaneously processing messages for Patient 55, Patient 66 and Patient 77 while maintaining FIFO for each patient. Using Pool Size = 2, two messages can be processed at the same time.

START

  • Currently processing two messages: Pre-admit for Patient 55 and Pre-admit for Patient 66.

 


NEXT

  • Pre-admit for Patient 66 completes and Pre-admit for Patient 55 is still being processed.

 

  • Find next message to process:
    • Dequeue Pre-admit for Patient 77 from the top of the queue. Valid to dequeue since no message is currently being processed for Patient 77.

 


NEXT

  • Pre-admit for Patient 77 completes and Pre-admit for Patient 55 is still being processed.

  • Find next message to process:
    • Cannot dequeue Admit or Discharge for Patient 55 from the top of the queue since a message for Patient 55 is still being processed.
      • This is the key to maintaining FIFO: a message cannot be dequeued if another message with the same FIFO group identifier is currently being processed.
    • Can dequeue Admit for Patient 77 since no message for Patient 77 is being processed.

 


NEXT

  • Pre-admit for Patient 55 completes and Admit for Patient 77 is still being processed.

  • Find next message to process:
    • Can dequeue Admit for Patient 55 from top of the queue since no message for Patient 55 is being processed.

 


NEXT

  • Admit for Patient 77 completes and Admit for Patient 55 is still being processed.

  • Find next message to process:
    • Cannot dequeue Discharge for Patient 55 from top of the queue since a message for Patient 55 is still being processed.
    • Can dequeue Admit for Patient 66 since no message for Patient 66 is being processed.
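The rule applied at each of these steps can be condensed into a purely conceptual sketch. The names below are invented for illustration; this is not the engine's actual implementation.

/// Conceptual illustration only: return the position of the next queued message that
/// may be dequeued, scanning in arrival order and skipping any message whose FIFO
/// group already has a message in flight. Returns 0 if nothing is eligible right now.
ClassMethod FindNextEligible(ByRef pQueue, ByRef pInFlight) As %Integer
{
	// pQueue(i) = FIFO group identifier of the i-th queued message (oldest first)
	// pInFlight(groupId) = "" for every group with a message currently being processed
	Set i=""
	For {
		Set i=$ORDER(pQueue(i))  Quit:i=""
		// Skipping every message of an in-flight group preserves per-group FIFO ordering
		If '$DATA(pInFlight(pQueue(i))) Return i
	}
	Quit 0
}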

Article
· January 16 · about 2 min read

Open Exchange Reviews - #62

If one of your packages on OEX receives a review, OEX notifies you only about YOUR own package.
The rating reflects the reviewer's experience with the state found at the time of the review.
It is a kind of snapshot and may have changed in the meantime.
Reviews made by other community members are marked with * in the last column.

I also made several Pull Requests on GitHub when I found an issue I was able to fix.
Some were accepted and merged, and others were simply ignored.
So if you made an important change and expect an updated review, just let me know.

| # | Package | Review | Stars | IPM | Docker | * |
|---|---------|--------|-------|-----|--------|---|
| 1 | JSON2Persistent | Now also available in IPM | 6.0 | y | y | |
| 2 | one-to-many-case | The first of 2026 | 5.0 | y | y | |
| 3 | IRIS_dockerization | promising composition | 4.4 | | y | |
| 4 | GlobalsDB-NodeJS-Admin | Historic artefact | 3.8 | | | |
| 5 | iris-jsonschema | just partial working | 3.8 | | | |
| 6 | GlobalsDB-Admin-NodeJS | Historic artefact #2 | 3.5 | | | |
| 7 | Pivot | Partner product promotion | 3.4 | | | |
| 8 | DbVisualizer | good product description | 3.0 | | | |
| 9 | PIQXL Gateway | Good looking product promotion | 2.7 | | | |
| 10 | Symedical | Partner promotion | 2.4 | | | |
| 11 | Fhirgure | a lot of JS | 1.4 | | | |
| 12 | iknowAV | Another artefact | 1.2 | | | |
| 13 | OMNI-Lab | link to integrator | 1.2 | | | |


NOTE:
If a review is not visible to you, it may still be awaiting approval by the OEX administrators.

Announcement
· January 16

[Video] Innovations in FHIR Data Management

Hey Community,

Enjoy the new video on InterSystems Developers YouTube:

⏯ Innovations in FHIR Data Management @ Ready 2025

Join us for an in-depth session on advancing FHIR data management by leveraging patient matching, clinical data enrichment, and AI decision support. We will explore how FHIR, MPIs, and AI have enhanced patient matching, streamlined interoperability, and enabled comprehensive FHIR-based patient summaries at Méderi Hospital. Additionally, we'll showcase AI-enabled clinical workflows and FHIR data mastering implementations at Stanford Health, providing real-time AI responses for triage, CDS, care management, and operational planning to improve care quality.

Presenters:
🗣 @Elijah Cotterrell, Product Manager at InterSystems
🗣 @Kevin Kindschuh, Senior Sales Engineer at InterSystems
🗣 @Matías Fernández, Technical Specialist at InterSystems
🗣 @Bernardo Linarez, Senior Technical Lead at InterSystems
🗣 Satchi Mouniswamy, Sr. Director - Integration at Stanford Healthcare
🗣 Nikesh Kotecha, Head of Data Science at Stanford Health Care

Curious about best practices? Watch this and subscribe for more!
