Article · June 9, 2023 · about 15 min read

EMPI installation and customization in Standalone - Customization of patient data

I will begin as, according to legend, Fray Luis de León began his lecture after several years of imprisonment:

As we said yesterday... our EMPI can receive data from multiple sources: REST, HL7 messaging, etc. But it is possible that the standard fields are not enough and we want to expand the patient information to help discriminate between patients and identify them uniquely. How could we customize the patient data? By modifying the standard classes to our liking? NOOOOO!!!! Well, a little, yes, but not recklessly: if we modify standard classes carelessly, we may find that a future update wipes out all of these modifications.

Customization of patient data

Very well, let's continue with the configuration of our EMPI, into which we have already entered a few patients. Imagine that we have decided to include the blood type as a criterion to help in the weight calculation process. Let's recall which classes define the patient data.

On the one hand, we have the object that we are going to persist, which we find in the Definition Designer:

On the other hand, we have the object that we will return in searches, which corresponds to the "Composite Record" previously configured from the Configuration Registry option.

 

Let's take a look at HSPI.Data.Patient, specifically the class declaration:

Class HSPI.Data.Patient Extends (%Persistent, HSPI.Data.Type.Patient, %MPRL.Linkage.Base) 

We can see that its properties are inherited from HSPI.Data.Type.Patient. If we open this class, we will see a series of values corresponding to the properties of the patient object. As we said, we could add the fields we need here... but that is NOT RECOMMENDED; we have a better option that will not be affected by version updates. If we go to the end of the HSPI.Data.Type.Patient class, we find the following property:

Property Extension As HS.Local.SDA3.PatientExtension;

This property is what will allow us to add as many fields as we want to our patient object. As you can see, since it belongs to the Local package, this information will not be lost in future version upgrades. Let's open HS.Local.SDA3.PatientExtension and include the BloodType property as a simple String:

Class HS.Local.SDA3.PatientExtension Extends HS.SDA3.DataType
{

Parameter HSDEPLOY = 0;
Parameter STREAMLETCLASS = "HS.SDA3.Streamlet.Patient";
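/// Custom field: the patient's blood type (e.g. "A-"), to be populated from inbound messages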
Property BloodType As %String;
}

Perfect, our object is ready to receive this information. How can we send this new information to our system? As we saw in the previous article, our EMPI has a default configuration that can receive HL7 messages, which we have already used to add patients.

Create new HL7 schema

In our example we will use an A28 message, to which we will add a new segment with the blood type information. To do this, we will create a new schema based on the standard 2.5.1 schema and define a new DocType for the A28 message type.

Here we have our new HealthShare_2.5 schema which, as you can see, extends the standard 2.5.1 schema, and in which we have created a new message structure of type A28. To add a new segment, we will first create said segment, which we will call ZBT, and add the field that will contain the blood type:

With the segment created, we only need to include it in the message structure, and we can start working with these customized messages:

Perfect, the message is set up with our ZBT segment. Now let's see an example of the type of message that we will receive with the new segment:

MSH|^~\&|HIS|HULP|EMPI||20230329085239||ADT^A28|72104|P|HealthShare_2.5
EVN|A28|20230329085239|20230329085239|1
PID|||1502935519^^^SERMAS^SN~424392^^^HULP^PI||CABEZUELA SANZ^PEDRO^^^||20160627|M|||PASEO JULIA ÁLVAREZ^395 3 E^MADRID^MADRID^28909^SPAIN||555710791^PRN^^PEDRO.CABEZUELA@GMAIL.COM|||||||||||||||||N|
PV1||N
ZBT|A-

In our example, the patient Pedro Cabezuela Sanz has blood type A-.

EMPI Production Configuration

As we saw in the previous EMPI configuration article, we have a production configured by default that allows us to immediately operate with our EMPI.

In our example we are using the Business Service EnsLib.HL7.Service.FileService, which captures files with HL7 messages from the specified path. To adapt it to the new type of messages that we are going to receive, we will change its Message Schema Category to refer to the newly created schema. Let's take a look at one of the messages we sent in the previous article's example to identify the Business Components our messages pass through:
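For a quick smoke test, you can drop a file containing our custom A28 message into the directory watched by the FileService. A minimal sketch in Python (the /hl7/in path and the .hl7 extension are assumptions; use the File Path and File Spec your Business Service is actually configured with):

# Write a custom A28 message where EnsLib.HL7.Service.FileService will pick it up.
# Assumptions: the service watches /hl7/in for *.hl7 files; HL7 segments are CR-separated.
# The PID segment is abbreviated here for readability.
a28 = "\r".join([
    "MSH|^~\\&|HIS|HULP|EMPI||20230329085239||ADT^A28|72104|P|HealthShare_2.5",
    "EVN|A28|20230329085239|20230329085239|1",
    "PID|||1502935519^^^SERMAS^SN~424392^^^HULP^PI||CABEZUELA SANZ^PEDRO^^^||20160627|M",
    "PV1||N",
    "ZBT|A-",  # our custom segment carrying the blood type
])

with open("/hl7/in/adt_a28_test.hl7", "w", encoding="utf-8") as f:
    f.write(a28)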

Our message enters through EnsLib.HL7.Service.FileService and it is redirected to HS.Hub.Standalone.HL7.Operation, here we have the configuration of said Business Operation:

We notice an attribute called HL7ToSDA3Class configured with the class HS.Hub.Standalone.HL7.HL7ToSDA3. Let's open the class and check what it does.

 
HS.Hub.Standalone.HL7.HL7ToSDA3

This class is part of the standard, and we have no interest in modifying anything in it, so we will create a new Local class that extends the standard one:

Class Local.Hub.Standalone.HL7.HL7ToSDA3 Extends HS.Hub.Standalone.HL7.HL7ToSDA3
{

ClassMethod GetTransformClass(msgType As %String, ByRef pTransformClass As %String) As %Status
{
	set tSC=$$$OK
	set pTransformClass=..OnGetTransformClass() 
	quit:pTransformClass'=""
	set pTransformClass = $CASE(msgType,
              "ADT_A01":"ADTA01ToSDA3", "ADT_A02":"ADTA02ToSDA3", "ADT_A03":"ADTA03ToSDA3",
              "ADT_A04":"ADTA01ToSDA3", "ADT_A05":"ADTA05ToSDA3", "ADT_A06":"ADTA01ToSDA3",
              "ADT_A07":"ADTA01ToSDA3", "ADT_A08":"ADTA01ToSDA3", "ADT_A09":"ADTA01ToSDA3",
              "ADT_A10":"ADTA01ToSDA3", "ADT_A11":"ADTA09ToSDA3", "ADT_A12":"ADTA01ToSDA3",
              "ADT_A13":"ADTA01ToSDA3", "ADT_A16":"ADTA01ToSDA3", "ADT_A17":"ADTA01ToSDA3",
              "ADT_A18":"ADTA18ToSDA3", "ADT_A23":"ADTA21ToSDA3", "ADT_A25":"ADTA01ToSDA3",
              "ADT_A27":"ADTA01ToSDA3", "ADT_A28":"ADTA28ToSDA3", "ADT_A29":"ADTA21ToSDA3",
              "ADT_A30":"ADTA30ToSDA3", "ADT_A31":"ADTA05ToSDA3", "ADT_A34":"ADTA30ToSDA3",
              "ADT_A36":"ADTA30ToSDA3", "ADT_A39":"ADTA40ToSDA3", "ADT_A40":"ADTA40ToSDA3",
              "ADT_A45":"ADTA45ToSDA3", "ADT_A47":"ADTA30ToSDA3", "ADT_A50":"ADTA50ToSDA3",
              "ADT_A60":"ADTA60ToSDA3", "BAR_P12":"BARP12ToSDA3", "MDM_T02":"MDMT02ToSDA3",
              "MDM_T04":"MDMT02ToSDA3", "MDM_T08":"MDMT02ToSDA3", "MDM_T11":"MDMT01ToSDA3",
              "OMP_O09":"OMPO09ToSDA3", "ORM_O01":"ORMO01ToSDA3", "ORU_R01":"ORUR01ToSDA3",
              "PPR_PC1":"PPRPC1ToSDA3", "PPR_PC2":"PPRPC1ToSDA3", "PPR_PC3":"PPRPC1ToSDA3",
              "RDE_O11":"RDEO11ToSDA3", "SIU_S12":"SIUS12ToSDA3", "SIU_S13":"SIUS12ToSDA3",
              "SIU_S14":"SIUS12ToSDA3", "SIU_S15":"SIUS12ToSDA3", "SIU_S16":"SIUS12ToSDA3",
              "SIU_S17":"SIUS12ToSDA3", "SIU_S26":"SIUS12ToSDA3", "VXU_V04":"VXUV04ToSDA3",
              :"Unsupported HL7 Message Type")

	set:pTransformClass="Unsupported HL7 Message Type" tSC = $$$HSError($$$HSErrUnsupportedHL7MessageType,msgType)
 	if $$$ISERR(tSC) quit tSC
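 	// Route ADT_A28 to our Local DTL package; all other messages keep the standard HS package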
 	if (msgType = "ADT_A28") {
 		set tTransformPackage = "Local.Hub.Standalone.HL7.DTL"
 	}
 	else {
 		set tTransformPackage = "HS.Hub.Standalone.HL7.DTL"
 	}
 	set pTransformClass = tTransformPackage_"."_pTransformClass
	
	quit tSC
}

}

Of all its methods, the one that interests us most is GetTransformClass, which indicates the transformation to apply depending on the type of message received. Since in our case we have chosen the A28 message as the carrier of the new segment with the blood group information, we must find its transformation within the $CASE command:

"ADT_A28":"ADTA05ToSDA3"

Here we have our initial transformation. As we can see, the structure of the ADT_A28 message is that of ADT_A05, so it has that transformation configured by default. In our case we will create a new transformation by copying ADTA05ToSDA3, naming it ADTA28ToSDA3, and replacing the value in the $CASE with:

"ADT_A28":"ADTA28ToSDA3"

As you can see in Local.Hub.Standalone.HL7.HL7ToSDA3, we have included an if statement to apply the proper transformation package depending on the type of HL7 message received: if the message is an ADT_A28, the package to use is Local.Hub.Standalone.HL7.DTL.

With the class Local.Hub.Standalone.HL7.HL7ToSDA3 compiled, we open our production again and set the HL7ToSDA3Class parameter of the Business Operation HS.Hub.Standalone.HL7.Operation to the newly implemented class Local.Hub.Standalone.HL7.HL7ToSDA3.

Perfect, we have finished the configuration of our production. Only one small detail is left... we haven't created our ADTA28ToSDA3 transformation yet!

Creating the custom transformation

From the list of transformations menu, we will look for ADTA05ToSDA3.

Here we have it; let's open it and save it with another name and package to create a copy of it.

With the new class created, we can start on the transformation. We must perform the following tasks:

  • Change the Source Doc Type from 2.5.1:ADT_A05 to HealthShare_2.5:ADT_A28. Remember that we created a new A28 message structure.
  • Assign the value of the BloodType field of our message's ZBT segment to the BloodType field found inside the Extension property of Patient. This BloodType field is available because we added the BloodType property to the HS.Local.SDA3.PatientExtension class.

The transformation would look like this:

Alright, let's review what we've done so far:

  • Creating the BloodType field in the HS.Local.SDA3.PatientExtension class to store the custom information.
  • Extension of HL7 2.5.1 schema to add a custom Z segment to A28 messages.
  • Extension of the HS.Hub.Standalone.HL7.HL7ToSDA3 class to indicate the transformation to perform when receiving the A28.
  • Creating the ADTA28ToSDA3 transformation.

With all of the above, we can now send messages with the blood group in the new segment and store it with the patient's data as an extension. What is left to complete the customization? The Local.CompositeRecord.Transient.Definition! We are saving the data correctly, but we have not yet modified the class used to load the blood group into the relevant patient data.

CompositeRecord Setup

Let's see how the data is displayed in the CompositeRecord:

<propertyGroup name="Gender" trustBlock="base" description="Gender - base: all tier 5">
<property>Gender</property>
</propertyGroup>

We define the group to which the new data will belong and its property path in the patient object. In our case, the BloodType property hangs from the Extension property, so our new group will look like this:

<propertyGroup name="Extension" trustBlock="base" description="Extension - base: all tier 5">
<property name='Blood Type'>Extension.BloodType</property>
</propertyGroup>

In our case we have called it Extension, but it could be any other name.

Once our Local.CompositeRecord.Transient.Definition class has been edited, we can start the production and launch a battery of tests with our custom A28 messages.

Testing

Let's check the trace of one of the messages entered:

There we have our ZBT segment with the value B+. Let's see how it is mapped to the object:

There we have our Extension field with the new BloodType property. Let's launch a search for that patient registered with MPIID 100000350.

Here is the Extension field and its B+ value. We could rename the tag and call it Blood Type without any problem.

Well, that's all: we have configured our EMPI to receive custom fields, register them as patient data, and return them.

If you have any questions or suggestions, do not hesitate to write a comment about it.

Article · June 8, 2023 · about 2 min read

Making your own Chat with Open AI ChatGPT in Telegram Using InterSystems Interoperability

Hi Community!

Just want to share with you an exercise I made to create "my own" chat with GPT in Telegram.

It became possible thanks to two components on Open Exchange: Telegram Adapter by @Nikolay Solovyev and IRIS Open-AI by @Kurro Lopez.

So with this example you can set up your own chat with ChatGPT in Telegram.

Let's see how to make it work!

Prerequisites

Create a bot using the @BotFather account and get the Bot Token. Then add the bot to a Telegram chat or channel and give it admin rights. Learn more at https://core.telegram.org/bots/api

Open an account on https://platform.openai.com/ (or create one if you don't have it) and get your OpenAI API Key and Organization id.
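If you want to verify the key and organization id before wiring them into the production, here is a quick sanity check from Python using the openai package's 0.x API (current as of mid-2023); the key and organization values are placeholders:

# pip install openai
import openai

openai.api_key = "sk-..."          # your OpenAI API Key (placeholder)
openai.organization = "org-..."    # your Organization id (placeholder)

resp = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Say hello"}],
)
print(resp.choices[0].message.content)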

Make sure you have IPM installed in your InterSystems IRIS. If not, here is a one-liner to install it:

USER>    s r=##class(%Net.HttpRequest).%New(),r.Server="pm.community.intersystems.com",r.SSLConfiguration="ISC.FeatureTracker.SSL.Config" d r.Get("/packages/zpm/latest/installer"),$system.OBJ.LoadStream(r.HttpResponse.Data,"c")

Or you can use a community docker image with IPM on board, like this:

$ docker run --rm --name iris-demo -d -p 9092:52797 -e IRIS_USERNAME=demo -e IRIS_PASSWORD=demo intersystemsdc/iris-community:latest

$ docker exec -it iris-demo iris session iris -U USER

USER>

Installation

Install the IPM package in a namespace with Interoperability enabled.

USER>zpm "install telegram-gpt"

Usage

Open the production.

Put your bot's Telegram Token into both the Telegram Business Service and the Telegram Business Operation:

Also, initialize the St.OpenAi.BO.Api.Connect operation with your ChatGPT API key and Organization id:

Start the production.

Ask any question in the Telegram chat. You'll get an answer via ChatGPT. Enjoy!

And in the visual trace:

Details

This example uses version 3.5 of the OpenAI ChatGPT model. The model can be changed via the Model parameter in the data-transformation rule.

Article · June 7, 2023 · about 15 min read

DeDupe an InterSystems® FHIR® Server With FHIR SQL Builder and Zingg

This post backs the demonstration at Global Summit 2023 "Demos and Drinks" with details most likely lost in the noise of the event.

This is a demonstration of how to use the FHIR SQL capabilities of InterSystems FHIR Server alongside the Super Awesome Identity and Resolution Solution, Zingg.ai, to detect duplicate records in your FHIR repository, and the basic idea behind remediation of those resources with the under-construction PID^TOO||, currently enrolled in the InterSystems Incubator program. If you are into the "Composable CDP" movement and want to master your FHIR repository in place, you may be in the right spot.


Demo

FHIR SQL Builder

This is an easy 3-step process for FHIR SQL.

  • Set up an Analysis
  • Set up a Transform
  • Set up a Projection

Zingg.ai

The documentation for Zingg is off-the-chain exhaustive, and though Zingg is extensible and scalable beyond this simple demo, here are the basics.

  • Find
  • Label
  • Train
  • Match
  • Link
 

Let's get on with it as they say...

FHIR SQL

We were awarded a full trial of the Health Connect Cloud suite for the duration of the incubator; included in that was the FHIR Server with FHIR SQL enabled.

The FHIR SQL Builder (abbreviated “builder”) is a sophisticated projection tool to help developers create custom SQL schemas using data in their FHIR (Fast Healthcare Interoperability Resources) repository without moving the data to a separate SQL repository. The objective of the builder is to enable data analysts and business intelligence developers to work with FHIR using familiar analytic tools, such as ANSI SQL, Power BI, or Tableau, without having to learn a new query syntax. 

I know, right? Super great. Now our "builder" is going to go to work projecting the fields we need to identify the duplicate records with Zingg.

The first step is the analysis, and you can hardly go wrong clicking this one into submission: simply point the analysis to "localhost", which is essentially the InterSystems FHIR Server underneath.

The transforms are the critical piece to get right: you need to build these fields out to flatten the FHIR resources so they can be read over the SQL super server to make some decisions. These will most likely need some thought, as the more fields you transform to SQL, the better your machine learning model will end up when prospecting for duplicates.

Now, a data-steward type should typically be building these, but I whipped up a few for the demo with questionable simplicity. This was done using the "clown suit" (hitting the pencil) to generate them, but considering the sophistication that may go into them, you can import and export them as well. Pay special attention to the "Package" and "Name", as this will be the source table for your SQL connection.

This transform is essentially saying we want to use name and gender to detect duplicates.

 
Patient Transform example

Now, the last part is essentially scheduling the job to project the data to a target schema. I think you will be pleasantly surprised that as data is added to the FHIR Server, the projections fill in automatically, phew.

Now, to seal the deal on the FHIR SQL setup, you can see the projection for Patient (PIDTOO.Patient) become visible; then you create another table to store the output from the dedupe run (PIDTOO.PatientDups).
 

Another step you will need to complete is enabling the firewall so that external connections to your deployment are allowed, and permitting access for the source CIDR block connecting to the super server.

 

Mentally bookmark the overview page, as it has the connectivity information and credentials needed to connect before moving to the next step.

 
 
DeDupe with Zingg

Zingg is super powerful, OSS, runs on Spark, and scales to your wallet when it comes to de-duplication of datasets large and small. I don't want to oversimplify the task, but the reality is that the documentation and functional container are enough to get up and running very quickly. We will keep this to the brass tacks, though, to minimally point out what needs to be completed to execute your first de-duplication job fired off against an IRIS database.

Install

Clone the zingg.ai repo: https://github.com/zinggAI/zingg

We also need the JDBC driver for Zingg to connect with IRIS. Download the IRIS JDBC driver, add the path of the driver to the spark.jars property of zingg.conf, and place the jar in the `thirdParty/lib` directory.

spark.jars=/home/sween/Desktop/PIDTOO/api/zingg/thirdParty/lib/intersystems-jdbc-3.7.1.jar

  

Match Criteria

This step takes some thought, and can be fun if you are into this stuff or enjoy listening to black-box recordings of plane crashes on YouTube. To demonstrate, recall the fields we projected from "builder"; they establish the match criteria. All of this is done in our Python implementation that declares the PySpark job, which will be revealed in its entirety down a ways.

# FHIRSQL Source Object FIELDDEFS
# Pro Tip!
# These Fields are included in FHIRSQL Projections, but not specified in the Transform
fhirkey = FieldDefinition("Key", "string", MatchType.DONT_USE)
dbid = FieldDefinition("ID", "string", MatchType.DONT_USE)
# Actual Fields from the Projection
srcid = FieldDefinition("IdentifierValue", "string", MatchType.DONT_USE)
family = FieldDefinition("NameFamily", "string", MatchType.FUZZY)
given = FieldDefinition("NameGiven", "string", MatchType.FUZZY)
zip = FieldDefinition("AddressPostalCode", "string", MatchType.ONLY_ALPHABETS_FUZZY)
gender = FieldDefinition("Gender", "string", MatchType.FUZZY)

fieldDefs = [fhirkey, dbid, srcid, family, given, zip, gender]

So the fields match the attributes in our IRIS projection, along with the MatchType we set for each field. You'll be delighted with what is available, as you can immediately put them to good use with a clear understanding.

Three common ones are here:

  • FUZZY: Generalized matching with strings and stuff.
  • EXACT: No variations allowed; a deterministic value. Guards against domain conflicts, sorta.
  • DONT_USE: Fields that have nothing to do with the matching but are needed for remediation or understanding the output.

Some other favorites of mine are below, as they seem to work on dirty data a little better and make sense of multiple emails.

  • EMAIL: Hacks off the domain name and the @, and uses the string.
  • TEXT: Matches things between two strings.
  • ONLY_ALPHABETS_FUZZY: Omits integers and non-alphabetic characters where they clearly do not belong for match consideration.

The full list is available here for the curious.
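Purely as a hedged illustration: if your builder transform had also projected an email or street column, these types would slot straight into the field definitions above (TelecomValue and AddressLine are hypothetical column names, not part of this demo's projection):

# Hypothetical extra fields; the column names assume additional builder transforms exist.
email = FieldDefinition("TelecomValue", "string", MatchType.EMAIL)
street = FieldDefinition("AddressLine", "string", MatchType.TEXT)
fieldDefs = fieldDefs + [email, street]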

Model

Create a folder to build your model... this one follows the standard in the repo: create the folder `models/700`.

# Object MODEL
args = Arguments()
args.setFieldDefinition(fieldDefs)
args.setModelId("700")
args.setZinggDir("/home/sween/Desktop/PIDTOO/api/zingg/models")
args.setNumPartitions(4)
args.setLabelDataSampleSize(0.5)

Input

These values correspond to what we set up in the previous steps on "builder":

# "builder" Projected Object FIELDDEFS
InterSystemsFHIRSQL = Pipe("InterSystemsFHIRSQL", "jdbc")
InterSystemsFHIRSQL.addProperty("url","jdbc:IRIS://3.131.15.187:1972/FHIRDB")
InterSystemsFHIRSQL.addProperty("dbtable", "PIDTOO.Patient")
InterSystemsFHIRSQL.addProperty("driver", "com.intersystems.jdbc.IRISDriver")
InterSystemsFHIRSQL.addProperty("user","fhirsql")
# Use the same password that is on your luggage
InterSystemsFHIRSQL.addProperty("password","1234")
args.setData(InterSystemsFHIRSQL)

Output

Note that this table is not projected by "builder"; it is an empty table we created to house the results from Zingg.

# Zingg's Destination Object on IRIS
InterSystemsIRIS = Pipe("InterSystemsIRIS", "jdbc")
InterSystemsIRIS.addProperty("url","jdbc:IRIS://3.131.15.187:1972/FHIRDB")
InterSystemsIRIS.addProperty("dbtable", "PIDTOO.PatientDups")
InterSystemsIRIS.addProperty("driver", "com.intersystems.jdbc.IRISDriver")
InterSystemsIRIS.addProperty("user","fhirsql")
# Please use the same password as your luggage
InterSystemsIRIS.addProperty("password","1234")

args.setOutput(InterSystemsIRIS)

If you are trying to understand the flow here, hopefully this will clarify things:

1. Zingg reads the projected data from builder (PIDTOO.Patient).
2. We do some "ML Shampoo" against the data.
3. Then we write the results back to IRIS (PIDTOO.PatientDups).
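For completeness, here is a minimal sketch of how the pieces above assemble into a runnable job, mirroring the style of Zingg's bundled Python examples (treat the import style and class names as assumptions against your Zingg version):

from zingg.client import *  # Arguments, FieldDefinition, MatchType, ClientOptions, Zingg
from zingg.pipes import *   # Pipe

# ... fieldDefs, args, InterSystemsFHIRSQL and InterSystemsIRIS as defined above ...

# Pick the phase to run: findTrainingData, label, train, or match
options = ClientOptions([ClientOptions.PHASE, "match"])

# Fire the job off on Spark
zingg = Zingg(args, options)
zingg.initAndExecute()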

Thanks to @Sergei Shutov for the icons!

 

ML Shampoo

Now, Zingg is a supervised machine learning implementation, so you are going to have to train it up front, and at intervals to keep the model smart. It's the "rinse and repeat" part of the analogy, if you haven't gotten the shampoo reference by now.

  • Find - Go get some data
  • Label - Prompt the human to help us out
  • Train - Once we have enough labelled data
  • Match - Zingg writes out the results
  • Link - Join matched records across sources

The find, label, and train phases map to these commands:

bash scripts/zingg.sh --properties-file config/zingg-iris.conf --run pidtoo-iris/FHIRPatient-IRIS.py findTrainingData
bash scripts/zingg.sh --properties-file config/zingg-iris.conf --run pidtoo-iris/FHIRPatient-IRIS.py label
bash scripts/zingg.sh --properties-file config/zingg-iris.conf --run pidtoo-iris/FHIRPatient-IRIS.py train

For the find phase, you will get something a little like the below if things are working correctly.

 
findTrainingData

Now we train the Cylon with supervised learning; let's give it a go.

 
Label

Now, do what the Cylon says, and do this a lot: maybe during meetings, or on the Red Line on your way to or heading home from work (get it? Train). You'll need enough labels for the train phase, where Zingg goes to town and works its magic finding duplicates.

bash scripts/zingg.sh --properties-file config/zingg-iris.conf --run pidtoo-iris/FHIRPatient-IRIS.py train

Ok, here we go, let's get our results:

bash scripts/zingg.sh --properties-file config/zingg-iris.conf --run pidtoo-iris/FHIRPatient-IRIS.py match

We now have some results back in the PIDTOO.PatientDups table, which brings us to the point of all this. We are going to use @Dmitry Maslennikov's sqlalchemy sorcery to connect via the notebook and inspect our results.

from sqlalchemy import create_engine
# FHIRSQL Builder Cloud Instance
engine = create_engine("iris://fhirsql:1234@3.131.15.187:1972/FHIRDB")
conn = engine.connect()

query = '''
SELECT
    TOP 20 z_cluster, z_maxScore, z_minScore, NameGiven, NameFamily, COUNT(*)
FROM
    PIDTOO.PatientDups
GROUP BY
    z_cluster
HAVING 
    COUNT(*) > 1
'''
result = conn.exec_driver_sql(query)
# iterate the cursor to print the actual rows
for row in result:
    print(row)

It takes a little bit to interpret the results, but here is the result of the brief training on the NC voters data loaded into FHIR.

 
loadncvoters2fhir.py

The output Zingg gave us is pretty great for the minimal effort I put into training things in between gaslighting.

z_cluster is the id Zingg assigns to the duplicates; I call it the "dupeid". Just understand that it is the identifier you want to query to examine the potential duplicates. I'm accustomed to flagging a minScore of 0.00, and anything over 0.90 for a score, for examination.

(189, 0.4677305247393828, 0.4677305247393828, 'latonya', 'beatty', 2)
(316, 0.8877195988867068, 0.7148998161578, 'wiloiam', 'adams', 5)
(321, 0.5646965557084127, 0.0, 'mar9aret', 'bridges', 3)
(326, 0.5707960437038071, 0.0, 'donnm', 'johnson', 6)
(328, 0.982044685998597, 0.40717509762282955, 'christina', 'davis', 4)
(333, 0.8879795543643093, 0.8879795543643093, 'tiffany', 'stamprr', 2)
(334, 0.808243240184001, 0.0, 'amanta', 'hall', 4)
(343, 0.6544295790716498, 0.0, 'margared', 'casey', 3)
(355, 0.7028336885619522, 0.7028336885619522, 'dammie', 'locklear', 2)
(357, 0.509141927875999, 0.509141927875999, 'albert', 'hardisfon', 2)
(362, 0.5054569794103886, 0.0, 'zarah', 'hll', 6)
(366, 0.4864567456390275, 0.4238040425261962, 'cara', 'matthews', 4)
(367, 0.5210329255531461, 0.5210329255531461, 'william', 'metcaif', 2)
(368, 0.6431091575056218, 0.6431091575056218, 'charles', 'sbarpe', 2)
(385, 0.5338624802449684, 0.0, 'marc', 'moodt', 3)
(393, 0.5640435106505274, 0.5640435106505274, 'marla', 'millrr', 2)
(403, 0.4687497402769476, 0.0, 'donsna', 'barnes', 3)
(407, 0.5801171648347092, 0.0, 'veronicc', 'collins', 35)
(410, 0.9543673811569922, 0.0, 'ann', 'mason', 7)
(414, 0.5355771790403805, 0.5355771790403805, 'serry', 'mccaray', 2)

Let's pick "dupeid" 410 and see how we did; the results seem to think there are 7 duplicates.
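A hedged sketch of that follow-up query, reusing the sqlalchemy connection from above (the score columns come from the Zingg output we just printed; Key is the DONT_USE field carrying the FHIR resource key):

# Pull every record Zingg grouped into cluster 410
dupes_410 = '''
SELECT z_cluster, z_minScore, z_maxScore, NameGiven, NameFamily, "Key"
FROM PIDTOO.PatientDups
WHERE z_cluster = 410
'''
for row in conn.exec_driver_sql(dupes_410):
    print(row)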

Ok, so there are the 7 records, with variable scores... Let's dial it in a little more and only report back scores higher than 0.90.

 Wooo!

So now, if you recall, we have the `MatchType.DONT_USE` for `Key` in our match criteria showing up in our output, but you know what?

USE IT!

These are the FHIR Patient resource ids in the FHIR repository that we have identified as duplicates and that require remediation.
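As a starting point for remediation, something like this could pull the flagged Patient resources straight back from the FHIR repository for review (a sketch only: the endpoint URL, credentials, and ids are placeholders, and your deployment may use OAuth rather than basic auth):

import requests

# Placeholders: your FHIR endpoint and the Key values flagged in PIDTOO.PatientDups
FHIR_BASE = "https://your-fhir-server/fhir/r4"
duplicate_keys = ["Patient/101", "Patient/102"]

for key in duplicate_keys:
    resp = requests.get(
        f"{FHIR_BASE}/{key}",
        headers={"Accept": "application/fhir+json"},
        auth=("user", "password"),  # swap for your deployment's auth scheme
    )
    patient = resp.json()
    print(patient.get("id"), patient.get("name"))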
🔥

讨论 (0)1
登录或注册以继续
Question · June 7, 2023

How can I use git for versioning while working in IRIS?

How can I use git for versioning while working in IRIS?
I have a directory that I created while working in IRIS. How can I manage version control using git in this case?

Is it possible with the VS Code terminal or with a normal terminal on a Mac?
