Article
· April 25, 2023 · About 12 minutes to read

Configuring Mirror in Docker

A common need for our customers is to configure both HealthShare HealthConnect and IRIS in high availability mode.

It's common for other integration engines on the market to be advertised as having "high availability" configurations, but that's not really true. In general, these solutions work with external databases, and therefore, if those databases are not themselves configured for high availability, a database crash or a lost connection renders the entire integration tool unusable.

In the case of InterSystems solutions, this problem does not exist, as the database is part of, and indeed the core of, the tools themselves. And how has InterSystems solved the problem of high availability? With abstruse configurations that could drag us into a spiral of alienation and madness? NO! At InterSystems we have listened and attended to your complaints (as we always try to do ;) ) and we have made the mirroring feature available to all our users and developers.

Mirroring

How does the Mirror work? The concept itself is very simple. As you already know, both IRIS and HealthShare work with a journaling system that records all update operations on the databases of each instance. This journaling system is what later helps us recover an instance without data loss after a crash. Well, these journal files are shipped between the instances configured in the mirror, keeping them permanently up to date.
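Conceptually, this journal shipping can be pictured with a toy Python sketch. This is only an illustration of the idea (an append-only log of updates replayed by the other member), not how IRIS actually implements mirroring:

```python
class Node:
    """Toy model of journal-based mirroring: the primary records every
    update in a journal; the backup replays that journal to stay in sync."""

    def __init__(self):
        self.data = {}
        self.journal = []  # ordered log of update operations

    def set(self, key, value):
        self.data[key] = value
        self.journal.append(("set", key, value))

    def replay(self, journal):
        # Apply journal entries received from the other mirror member.
        for op, key, value in journal:
            if op == "set":
                self.data[key] = value

primary, backup = Node(), Node()
primary.set("patient:1", "Alice")
primary.set("patient:2", "Bob")
backup.replay(primary.journal)      # journal entries shipped to the backup
print(backup.data == primary.data)  # prints: True
```

The real mechanism adds synchronous acknowledgement for the failover member and asynchronous delivery for DR/reporting members, as described below.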

Architecture

Let's briefly explain what the architecture of a system configured in Mirror would look like:

  • Two instances configured in failover mode:
    • Active node: receives all regular read/write operations.
    • Passive node: in read-only mode, it synchronously receives any changes produced on the active node.
  • 0-14 asynchronous instances: as many asynchronous instances as you want to use, they can be of two types:
    • DR async (Disaster Recovery): nodes in read-only mode that are not part of the failover, although they can be promoted manually; once promoted, such a node can take over as primary in the event of the failure of both failover nodes. Their data is updated asynchronously, so its freshness is not guaranteed.
    • Reporting Asyncs: nodes updated asynchronously for use in BI tasks or data mining. They cannot be promoted to failover members, since writes can be performed on their data.
  • ISCAgent: installed on each server hosting an instance. It is in charge of monitoring the status of the instances on that server, and it is also an additional channel of communication between the mirror servers, besides direct communication.
  • Arbiter: an ISCAgent installed independently of the servers that make up the mirror; it increases the safety and control of failover within the mirror by monitoring the installed ISCAgents and the IRIS/HealthShare instances. Its installation is not mandatory.

This would be the operation of a Mirror formed by a failover with only two nodes:

In an InterSystems IRIS mirror, when the primary becomes unavailable, the mirror fails over to the backup.

A note before we start

The project associated with this article does not have an active license that allows the configuration of the mirror. If you want to try it, send me an email directly or add a comment at the end of the article and I will contact you.

Deployment in Docker

For this article, we are going to set up a small project in Docker that deploys two failover instances plus an Arbiter. By default, the IRIS images available for Docker already have the ISCAgent installed and configured, so we can skip that step. It is advisable to open the project associated with this article in Visual Studio Code, since it will allow us to work more comfortably with the server files later on.

Let's see what form our docker-compose.yml would have:

version: '3.3'
services:
  arbiter:
    container_name: arbiter
    hostname: arbiter
    image: containers.intersystems.com/intersystems/arbiter:2022.1.0.209.0
    init: true
    command:
      - /usr/local/etc/irissys/startISCAgent.sh 2188
  mirrorA:
    image: containers.intersystems.com/intersystems/iris:2022.1.0.209.0
    container_name: mirrorA
    depends_on:
      - arbiter
    ports:
    - "52775:52773"
    volumes:
    - ./sharedA:/shared
    - ./install:/install
    - ./management:/management
    command:
      --check-caps false
      --key /install/iris.key
      -a /install/installer.sh
    environment:
    - ISC_DATA_DIRECTORY=/shared/durable
    hostname: mirrorA
  mirrorB:
    image: containers.intersystems.com/intersystems/iris:2022.1.0.209.0
    container_name: mirrorB
    depends_on:
      - arbiter
      - mirrorA
    ports:
    - "52776:52773"
    volumes:
    - ./sharedB:/shared
    - ./install:/install
    - ./management:/management
    command:
      --check-caps false
      --key /install/iris.key
      -a /install/installer.sh
    environment:
    - ISC_DATA_DIRECTORY=/shared/durable
    hostname: mirrorB

We can see that we have defined 3 containers:

  • Arbiter: corresponds to the ISCAgent (even though the image is called arbiter) that will be deployed to control the IRIS instances forming the mirror failover. When the container starts, it executes a shell script that launches the ISCAgent listening on port 2188 of the container.
  • mirrorA: container in which the IRIS v.2022.1.0.209 image will be deployed and which we will later configure as the primary Failover node.
  • mirrorB: container in which the IRIS v.2022.1.0.209 image will be deployed and which we will later configure as a secondary Failover node.

When we execute the docker-compose up -d command, the defined containers will be deployed in our Docker environment, and it should look like this in Docker Desktop (if we are running it from Windows).

Mirror configuration

With our containers deployed, we will proceed to access the instances that we are going to configure in mirror; the first will be listening on port 52775 (mirrorA) and the second on port 52776 (mirrorB). The username and password are superuser / SYS.

Because the instances are deployed in Docker, we have two options for configuring the addresses of our servers. The first is to use the names of our containers directly in the configuration (which is the easiest way); the second is to check the IPs that Docker has assigned to each container (by opening the console and executing ifconfig, which returns the assigned IP). For clarity, we will use the names we have given to each container as its address within Docker.

First we will configure the instance that we will use as the active node of the FailOver. In our case it will be what we have called mirrorA.

The first step will be to enable the mirroring service, so we will access the mirror menu from the management portal: System Administration --> Configuration --> Mirror Settings --> Enable Mirror Service and mark the Service Enabled check:

With the service enabled we can start configuring our active node. After enabling the service you will be able to see that new options have been enabled in the Mirror menu:

In this case, since we do not have any mirror configuration already created, we must create a new one with the Create Mirror option. When we access this option, the management portal will open a new window from which we can configure our mirror:

Let's take a closer look at each of the options:

  • Mirror Name: the name with which we will identify our mirror. For our example we will call it MIRRORSET.
  • Require SSL/TLS: for our example we will not configure a connection using SSL/TLS, although in production environments it would be more than advisable, to prevent the journal files from being shared between the instances without any kind of encryption. If you are interested in configuring it, you have all the necessary information at the following URL of the documentation.
  • Use Arbiter: this option is not mandatory, but it is highly recommended, since it adds a layer of safety to our mirror configuration. For our example we will leave it checked and indicate the address where our Arbiter is running; in our case that is the container name, arbiter.
  • Use Virtual IP: in Linux/Unix environments this option is very interesting, since it allows us to configure a virtual IP for our active node that will be managed by the mirror. This virtual IP must belong to the same subnet as the failover nodes. Its operation is very simple: in case of failure of the active node, the mirror automatically configures the virtual IP on the server hosting the passive node to be promoted. In this way, the promotion of the passive node to active is completely transparent to users, since they remain connected to the same IP, even though it is now configured on a different server. If you want to know more about the virtual IP, you can review this URL of the documentation.

The rest of the configuration can be left as it is. On the right side of the screen we will see the information related to this node in the mirror:

  • Mirror Member Name: name of this mirror member, by default it will take the name of the server along with the name of the instance.
  • Superserver Address: Superserver IP address of this node, in our case, mirrorA.
  • Agent Port: port in which the ISCAgent corresponding to this node has been configured. By default 2188.

Once the necessary fields are configured, we can proceed to save the mirror. We can check how the configuration has been from the mirror monitor (System Operation --> Mirror Monitor).

Perfect, here we have our newly configured mirror. As you can see, only the active node that we have just created appears. Now let's add our passive node to the failover. We access the mirrorB management portal and open the Mirror Settings menu. As we did for the mirrorA instance, we must enable the Mirror service. We repeat the operation, and as soon as the menu options are updated we choose Join as Failover.

Here we have the mirror connection screen. Let's briefly explain what each of the fields means:

  • Mirror Name: name that we gave to the mirror at the time of creation, in our example MIRRORSET.
  • Agent Address on Other System: IP of the server where the ISCAgent of the active node is deployed, for us it will be mirrorA.
  • Agent Port: listening port of the ISCAgent of the server in which we created the mirror. By default 2188.
  • InterSystems IRIS Instance Name: the name of the IRIS instance on the active node. In this case it coincides with that of the passive node, IRIS.

After saving the mirror data we will have the option to define the information related to the passive node that we are configuring. Let's take a look again at the fields that we can configure of the passive node:

  • Mirror Member Name: name that the passive node will take in the mirror. By default formed by the name of the server and the instance.
  • Superserver Address: IP address of the superserver in our passive node. In this case mirrorB.
  • Agent Port: listening port of the ISCAgent installed on the passive node server that we are configuring. By default 2188.
  • SSL/TLS Requirement: not configurable in this example, we are not using SSL/TLS.
  • Mirror Private Address: IP address of the passive node. As we have seen, when using Docker we can use the container name mirrorB.
  • Agent Address: IP address to the server where the ISCAgent is installed. Same as before, mirrorB.

We save the configuration as we have indicated and we return to the mirror monitor to verify that we have everything correctly configured. We can visualize the monitor of both the active node in mirrorA and the passive one in mirrorB. Let's see the differences between both instances.

Mirror monitor on active node mirrorA:

Mirror monitor on passive node mirrorB:

As you can see the information shown is similar, basically changing the order of the failover members. The options are also different, let's see some of them:

  • Active node mirrorA:
    • Set No Failover: prevents the execution of the failover in the event of a stoppage of any of the instances that are part of it.
    • Demote other member: Removes the other failover member (in this case mirrorB) from the mirror configuration.
  • Passive node mirrorB:
    • Stop Mirror On This Member: Stops mirror synchronization on the failover passive node.
    • Demote To DR Member: demotes this node from the failover, with its real-time synchronization, to Disaster Recovery mode with asynchronous updates.

Perfect, we already have our nodes configured; now let's see the final step in our configuration. We have to decide which databases will become part of the mirror and configure them on both nodes. If you look at the README.md of the Open Exchange project associated with this article, you will see that we configure and deploy two applications that we usually use for training. These applications are deployed automatically when we start the Docker containers, and the NAMESPACES and databases are created by default.

The first application is COMPANY that allows us to save company records and the second is PHONEBOOK that allows us to add personal contacts related to registered companies, as well as customers.

Let's add a company:

And now let's go to create a personal contact for the previous company:

The company data will be registered in the COMPANY database and the contact data in PERSONAL; both databases are mapped so that they can be accessed from the PHONEBOOK namespace. If we check the tables on both nodes, we will see that in mirrorA we have the data of the company and the contact, but in mirrorB there is still nothing, as expected.

Companies registered in mirrorA:

Alright, let's proceed to configure the databases in our mirror. To do this, from our active node (mirrorA), we access the local database administration screen (System Administration --> Configuration --> System Configuration --> Local Databases), click on the Add to Mirror option, select from the list all the databases that we want to add, and read the message on the screen:

Once we add the databases to the mirror from the active node, we have to make a backup of them, or copy the database files (IRIS.dat), and restore them on the passive node. If you decide to make a direct copy of the IRIS.dat files, keep in mind that you have to freeze writes to the database being copied; you can see the necessary commands at the following URL of the documentation. In our example it will not be necessary to freeze anything, since nobody but us is writing to it.

Before making this copy of the database files, let's check the status of the mirror from the monitor of the active node:

Let's see the passive node:

As we can see, the passive node informs us that although we have 3 databases configured in the mirror, their configuration has not yet been completed. Let's proceed to copy the databases from the active node to the passive one. Don't forget that we must dismount the databases on the passive node before making the copy; to do this, from the management portal we go to System Configuration --> Databases and, entering each one, dismount it.

Perfect! Databases dismounted. Let's access the project code associated with the article from Visual Studio Code; we can see the folders containing the IRIS installations, sharedA for mirrorA and sharedB for mirrorB. Let's access the folder where the COMPANY, CUSTOMER and PERSONAL databases are located (/sharedA/durable/mgr) and copy the IRIS.dat of each mirrored database to the corresponding directories of mirrorB (/sharedB/durable/mgr).

Once the copy is finished, we mount the mirrorB databases again and check the status of the configured databases from the mirror monitor in mirrorB:

Bingo! Our mirror has recognized the databases and now we just need to activate and update them. To do this, we will click on the Activate action and then on Catchup, which will appear after activation. Let's see how they end up:

Perfect, our databases are already correctly configured in mirror, if we consult the COMPANY database we should see the record that we registered from mirrorA before:

Obviously our COMPANY database has the record that we entered previously in mirrorA; we copied the entire database, after all. Let's add a new company from mirrorA, which we will call "Another company", and query the COMPANY database table again:

Here we have it. We will only have to make sure that our databases configured in mirror are in read only mode for the passive node mirrorB:

And there they are, in R (read) mode. Well, we now have our mirror configured and our databases synchronized. If we had productions running, it would not be a problem, since the mirror automatically takes charge of managing them, starting them on the passive node in the event of a failure of the active node.

Thank you very much to all of you who have reached this point! It was long but I hope you find it useful.

Article
· April 24, 2023 · About 5 minutes to read

Webinars for Developers: List of Archived Videos

Hello, developers!

We have created a page that gathers the archived videos of the developer webinars we have held in the past.

We will continue to hold webinars, so we would be happy if you bookmarked this page!

Playlist here 👉 https://www.youtube.com/playlist?list=PLzSN_5VbNaxB39_H2QMMEG_EsNEFc0ASz

Held in 2025:

✅ Webinars

Held in 2024:

✅ Webinars

Held in 2023:

✅ Webinars

✅ InterSystems Healthcare x IT Seminar: Application Development, Part 2

Held in 2022:

✅ InterSystems Healthcare x IT Seminar: Application Development, Part 1

✅ Modern Hospital Show

✅ InterSystems Japan Virtual Summit 2022: playlist here 👉 https://www.youtube.com/playlist?list=PLzSN_5VbNaxAGImHt9sB0n-e7IlHvfcOu

✅ Others

Held in 2021: Webinars

✅ Webinar (January)

✅ Webinar (October): playlist here 👉 https://www.youtube.com/playlist?list=PLzSN_5VbNaxBlWFxRfrrrScerJrpo7xjr

Held in 2021: InterSystems Japan Virtual Summit 2021

✅ Development: playlist 👉 https://www.youtube.com/playlist?list=PLzSN_5VbNaxAE7EpPq8npD_LFwMRvFRBI

✅ Interoperability with HL7 FHIR: playlist 👉 https://www.youtube.com/playlist?list=PLzSN_5VbNaxD6pgXvPtS92UeElPq2cDac

✅ Operation & Management: playlist 👉 https://www.youtube.com/playlist?list=PLzSN_5VbNaxDTIXYG_iwJczwzJbtSM8DF

✅ Migration: playlist 👉 https://www.youtube.com/playlist?list=PLzSN_5VbNaxCYZuzDKN5miU0KlTSDlW1Z

Announcement
· April 19, 2023

Registration Is Open: Join the InterSystems China Technical Training and Certification Program

To support talent development in the healthcare IT industry, InterSystems has tailored a practical, flexible, needs-oriented technical certification training program for the Chinese market. Courses are taught in person by senior InterSystems technical experts, helping users quickly master InterSystems technology and benefit from its rapid development, so as to better serve hospital IT construction. Click here for course details: InterSystems China Technical Training and Certification

Your best learning path

Why attend InterSystems technical certification training?

• InterSystems data platform technology has become one of the mainstream technologies in China's healthcare IT field, supporting the long-term, stable operation of the core systems of hundreds of large public hospitals nationwide for more than 20 years;
• The program is tailored for technical users in China: needs-oriented, flexible, and highly hands-on;
• Senior InterSystems technical experts teach the courses in person, helping users quickly master InterSystems technology and best practices;
• Official InterSystems certification training carries greater authority, helping users make better use of InterSystems technology, benefit from its rapid development, and stay technically up to date.

Who can attend the certification training?

Any IT professional or organization that uses InterSystems technology, or is interested in it, may attend.

What skills and growth can you gain from the certification training?

• Continuously updated courses that combine theory with practice help you keep improving your command of InterSystems technology;
• By joining InterSystems' tiered training program and passing the assessment, you obtain a certification;
• Offline courses and events help you expand your professional network.

Who are the instructors of the InterSystems China certification training?

Courses are taught by InterSystems China's team of senior engineers.

How are registration and course schedules arranged?

A class starts once at least 5 people have registered, once per quarter. Training is held in person, and the exam includes both a written test and hands-on practice. For course fees, please consult your InterSystems account manager. Hospitals and healthcare IT companies are encouraged to enroll as organizations.

To register or for more details, please contact your InterSystems account manager, or reach the InterSystems China team via:

Phone: 400-601-9890

Email: GCDPsales@InterSystems.com

Article
· April 18, 2023 · About 2 minutes to read

    AI generated text detection using IntegratedML

    In recent years, artificial intelligence technologies for text generation have developed significantly. For example, neural-network-based text generation models can produce texts that are almost indistinguishable from texts written by humans.
    ChatGPT is one such service. It is a huge neural network trained on a large number of texts, which can generate texts on various topics and adapt to a given context.

    A new challenge, therefore, is to develop ways to recognize texts written not only by people but also by artificial intelligence (AI).

    There are two main methods for AI-written text recognition:

    • Use machine learning algorithms to analyze the statistical characteristics of the text;
    • Use cryptographic methods that can help determine the authorship of the text

    In general, the task of AI text recognition is difficult but important.

    I am happy to present an application for the recognition of the texts generated by AI. During development, I took the benefits of InterSystems Cloud SQL and Integrated ML, which include:

    • Fast and efficient data requests with high performance and speed;
    • User-friendly interface for non-experts in databases and machine learning;
    • Scalability and flexibility to quickly adjust ML models according to requirements;

    In the development and further training of the model, I used an open dataset, namely 35 thousand written texts. Half of the texts were written by hand by a large number of authors, and the other half was generated by AI with ChatGPT.

    Configuration used for GPT model:

    model="text-curie-001"
    temperature=0.7
    max_tokens=300
    top_p=1
    frequency_penalty=0.4
    presence_penalty=0.1

    Next, about 20 basic parameters were determined, according to which further training was carried out. Here are some of the options I used:

    • Characters count
    • Words count
    • Average word length
    • Sentences count
    • Average sentence length
    • Unique words count
    • Stop words count
    • Unique words ratio
    • Punctuations count
    • Punctuations ratio
    • Questions count
    • Exclamations count
    • Digits count
    • Capital letters count
    • Repeat words count
    • Unique bigrams count
    • Unique trigrams count
    • Unique fourgrams count

    As a result, I got a simple application that you can use for your tasks or just have fun.

    This is what it looks like:

    To try the application you can use the online demo or run it locally with your own Cloud SQL account.

    Also, this application participates in the contest. If you like it, vote for it.

    You are welcome to discuss this app in the comments if you are interested.
     

Article
· April 16, 2023 · About 4 minutes to read

    Tuples ahead

    Overview

    Cross-skilling from IRIS ObjectScript to Python, it becomes clear that there are some fascinating differences in syntax.

    One of these areas is how Python returns tuples from a method, with automatic unpacking.

    Effectively this presents as a method that returns multiple values. What an awesome invention :)

    out1, out2 = some_function(in1, in2)

    ObjectScript has an alternative approach with ByRef and Output parameters.

    Do ##class(some_class).SomeMethod(.inAndOut1, in2, .out2)

    Where:

    • inAndOut1 is ByRef
    • out2 is Output

    The leading dot (".") in front of the variable name passes the argument ByRef, and likewise for Output.

    The purpose of this article is to describe how the community PyHelper utility has been enhanced to give a pythonic way to take advantage of ByRef and Output parameters. It also gives access to %objlasterror and has an approach for handling the Python None type.
     

      Example ByRef

      Normal invocation for embedded python would be:

      oHL7=iris.cls("EnsLib.HL7.Message")._OpenId('er12345')

      When this method fails to open the object, the variable "oHL7" is an empty string.
      The method's signature includes a status parameter, available to ObjectScript, that explains the exact problem.
      For example:

      • The record may not exist
      • The record couldn't be opened in the default exclusive concurrency mode ("1") within the timeout

      ClassMethod %OpenId(id As %String = "", concurrency As %Integer = -1, ByRef sc As %Status = {$$$OK}) As %ObjectHandle

      The TupleOut method can assist returning the value of argument sc, back to a python context.
       

      > oHL7,tsc=iris.cls("alwo.PyHelper").TupleOut("EnsLib.HL7.Message","%OpenId",['sc'],1,'er145999', 0)
      > oHL7
      ''
      > iris.cls("%SYSTEM.Status").DisplayError(tsc)
      ERROR #5809: Object to Load not found, class 'EnsLib.HL7.Message', ID 'er145999'1

      The list ['sc'] contains a single item in this case. TupleOut can return multiple ByRef values, in the order specified, which is useful for automatically unpacking them into the intended Python variables.
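The ordering contract can be sketched in plain Python. This is a hypothetical helper, not PyHelper's actual implementation, and every name below is invented: the wrapper calls the method, collects the named out-parameters, and returns them after the return value, in the order requested.

```python
def tuple_out(func, out_names, *args, **kwargs):
    """Hypothetical sketch of the TupleOut idea: call func, then return
    (result, *out_values) with the out-values in the order requested.

    func must accept an 'outputs' dict and write its out-parameters there."""
    outputs = {}
    result = func(*args, outputs=outputs, **kwargs)
    return (result, *(outputs.get(name) for name in out_names))

def get_value(path, outputs):
    # Toy stand-in for a method with an Output status parameter.
    ok = path.startswith("MSH")
    outputs["pStatus"] = "OK" if ok else f"No segment found at path '{path}'"
    return "ADT^A01" if ok else ""

val, status = tuple_out(get_value, ["pStatus"], "<&$BadMSH:9.1")
print(val == "", status)
```

The caller gets the return value and the status in one unpacking step, which is the shape of the real TupleOut calls shown above.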

      Example Output parameter handling

      Python code:

      > oHL7=iris.cls("EnsLib.HL7.Message")._OpenId('145')
      > oHL7.GetValueAt('<%MSH:9.1')
      ''

      The returned string is empty, but is this because the element is actually empty, or because something went wrong?
      In ObjectScript there is also an output status parameter (pStatus) that can be accessed to determine this.

      Object script code:

      > write oHL7.GetValueAt("<%MSH:9.1",,.pStatus)
      ''
      > Do $System.Status.DisplayError(pStatus)
      ERROR <Ens>ErrGeneral: No segment found at path '<%MSH'

      With TupleOut the equivalent functionality can be attained by returning and unpacking both the method return value AND the status output parameter.

      Python code:

      > hl7=iris.cls("EnsLib.HL7.Message")._OpenId(145,0)
      > val, status = iris.cls("alwo.PyHelper").TupleOut(hl7,"GetValueAt",['pStatus'],1,"<&$BadMSH:9.1")
      > val==''
      True
      > iris.cls("%SYSTEM.Status").IsError(status)
      1
      > iris.cls("%SYSTEM.Status").DisplayError(status)
      ERROR <Ens>ErrGeneral: No segment found at path '<&$BadMSH'1


      Special variable %objlasterror

      In ObjectScript, percent variables are accessible across method scope.
      There are scenarios where detecting or accessing the special variable %objlasterror is useful after calling a core or third-party API.
      The TupleOut method allows access to %objlasterror, as though it had been defined as an Output parameter, when invoking methods from Python.

      > del _objlasterror
      
      > out,_objlasterror=iris.cls("alwo.PyHelper").TupleOut("EnsLib.HL7.Message","%OpenId",['%objlasterror'],1,'er145999', 0) 
      
      > iris.cls("%SYSTEM.Status").DisplayError(_objlasterror)
      ERROR #5809: Object to Load not found, class 'EnsLib.HL7.Message', ID 'er145999'1

      When None is not a String

      TupleOut handles Python None references as ObjectScript undefined. This allows parameters to take their defaults and methods to behave consistently.
      This is significant, for example, with %Persistent::%OnNew, where the %OnNew method is not triggered when None is supplied for initvalue, but would be triggered if an empty string were supplied.
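The distinction between "argument omitted" and "empty string supplied" can be seen in plain Python with a sentinel default. This is a toy example of the general pattern, not PyHelper code:

```python
_UNSET = object()  # sentinel: distinguishes "not supplied" from any real value

def on_new(initvalue=_UNSET):
    """Toy stand-in: behaves differently when the argument is omitted
    than when an empty string is explicitly passed."""
    if initvalue is _UNSET:
        return "defaulted"
    return f"initialised with {initvalue!r}"

print(on_new())    # argument omitted -> "defaulted"
print(on_new(""))  # empty string is still a real value
```

TupleOut applies the same idea at the Python/ObjectScript boundary: None crosses over as undefined, so the callee sees the argument as genuinely absent.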

      In objectscript the implementation might say:

      do oHL7.myMethod("val1",,,"val2")

      Note the lack of variables between commas.

      TupleOut facilitates the same behavior with:

      Python:

      iris.cls("alwo.PyHelper").TupleOut(oHL7,"myMethod",[],0,"val1",None,None,"val2")

      Another way to consider this, is being able to have one line implementation of invocation code, that behaves flexibly depending on pre-setup of variables:

      Object Script:

      set arg1="val1"
      kill arg2
      kill arg3
      set arg4="val2"
      do oHL7.myMethod(.arg1, .arg2, .arg3, .arg4)

      TupleOut facilitates the same behavior with:

      Python:

      arg1="val1"
      arg2=None
      arg3=None
      arg4="val2"
      iris.cls("alwo.PyHelper").TupleOut(oHL7,"myMethod",[],0,arg1,arg2,arg3,arg4)

      List and Dictionaries

      When handling parameters for input, ByRef, and Output, TupleOut uses PyHelper's automatic mapping between:

      • IRIS lists and Python lists
      • IRIS arrays and Python dictionaries

      It takes care to always use strings to represent dictionary keys when moving from IRIS arrays to Python Dict types.

      Conclusion

      I hope this article helps inspire new ideas and discussion around embedded Python.

      I also hope it encourages you to explore how flexibly IRIS can bend to meet new challenges.
