
Article
· March 24, 2022 · about 3 min read

GlobalToJSON-XL-Academic

This package offers a utility to export an XLarge Global into a JSON object file and to show
or import it again. In a previous example, this was all processed in memory. But with a
large Global you may experience a <MAXSTRING> or a <STORE> error
if the generated JSON structure exceeds the available memory.
 


Academic refers to the structure created.

  • each node of the Global, including the top node, is represented as a JSON object
  • {"node":<node name>,"val":<value stored>,"sub":[<JSON array of subscript objects>]}
  • value and subscript are optional, but at least one of them always exists for a valid node
  • the JSON object for the lowest-level subscript has only a value and no further subscripts.
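
For illustration only (the tiny global below is invented and not part of the package), a three-node global

USER>set ^demo="top",^demo(1)="a",^demo(1,"x")="deep"

would map to the structure described above as:

{"node":"^demo","val":"top","sub":[
{"node":1,"val":"a","sub":[
{"node":"x","val":"deep"}
]}
]}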

So this is basically a 1:1 image of your global, and it is exported to a file (default: gbl.json).
In addition to the export, a show method displays the generated file.
The tricky part is the import from file. It uses a customized JSON parser, as all other parsers
operate only in memory, which fails with a reasonably sized Global
(e.g. ^oddDEF with ~1.7 million nodes produces a ~78 MB JSON file).
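
A minimal sketch of the file-based idea (my own illustration, not the package's actual parser; the chunk size and the parser hook are assumptions): read the JSON file as a character stream in chunks and feed each chunk to an incremental parser, so the complete document never has to fit into a single string or object tree.

 set stream = ##class(%Stream.FileCharacter).%New()
 do stream.LinkToFile("gbl.json")
 while 'stream.AtEnd {
     set chunk = stream.Read(32000)   ; next chunk of the JSON file
     ; feed chunk to the incremental parser state machine here
 }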

Prerequisites

Make sure you have git and Docker Desktop installed.

Installation

Clone/git pull the repo into any local directory

git clone https://github.com/rcemper/GlobalToJSON-XLA.git    

Run the IRIS container with your project:

docker-compose up -d --build

How to Test it

The Global ^dc.MultiD is pre-loaded for testing.

There are 3 methods available:

  • ClassMethod export(gref As %String = "^%", file = "gbl.json") As %String (file = 0 displays the result in the terminal instead of writing a file)
  • ClassMethod show(file = "gbl.json") As %String
  • ClassMethod import(file = "gbl.json", test = 0) As %String (test = 1 loads the global into a PPG, a process-private global, for testing)

Open IRIS terminal

$ docker-compose exec iris iris session iris

USER>write ##class(dc.GblToJSON.XLA).export("^dc.MultiD")
File gbl.json created

USER>write ##class(dc.GblToJSON.XLA).export("^dc.MultiD",0)
{"node":"^dc.MultiD"
,"val":5
,"sub":[
{"node":1
,"val":"$lb(\"Braam,Ted Q.\",51353)"
,"sub":[
{"node":"mJSON"
,"val":"{}"
}
---  truncated ---

USER>write ##class(dc.GblToJSON.XLA).show()
{"node":"^dc.MultiD"
,"val":5
,"sub":[
{"node":1
,"val":"$lb(\"Braam,Ted Q.\",51353)"
---  truncated ---  

validated JSON object

Now we verify the load function by importing into a PPG as a test:

USER>write ##class(dc.GblToJSON.XLA).import(,1)
Global ^||dc.MultiD loaded

USER>zwrite ^||dc.MultiD
^||dc.MultiD=5
^||dc.MultiD(1)=$lb("Braam,Ted Q.",51353)
^||dc.MultiD(1,"mJSON")="{}"
^||dc.MultiD(2)=$lb("Klingman,Uma C.",62459)
^||dc.MultiD(2,2,"Multi","a")=1
^||dc.MultiD(2,2,"Multi","rob",1)="rcc"
^||dc.MultiD(2,2,"Multi","rob",2)=2222
^||dc.MultiD(2,"Multi","a")=1
^||dc.MultiD(2,"Multi","rob",1)="rcc"
^||dc.MultiD(2,"Multi","rob",2)=2222
^||dc.MultiD(2,"mJSON")="{""A"":""ahahah"",""Rob"":""VIP"",""Rob2"":1111,""Rob3"":true}"
^||dc.MultiD(3)=$lb("Goldman,Kenny H.",45831)
^||dc.MultiD(3,"mJSON")="{}"
^||dc.MultiD(4)=$lb("","")
^||dc.MultiD(4,"mJSON")="{""rcc"":122}"
^||dc.MultiD(5)=$lb("","")
^||dc.MultiD(5,"mJSON")="{}"
 
USER>

q.e.d.

Code Quality


Do not be surprised by some strange code constructs.
They were required because CodeQuality understands neither the NEW command nor the scope of % variables!

Video

Online Demo Terminal
Online Demo SMP

Previous article in DC

GitHub

Article
· March 23, 2022 · about 1 min read

How to download an image file from a web server with ObjectScript

This article is from the InterSystems FAQ site.
 

The code below downloads https://www.intersystems.com/assets/intersystems-logo.png and saves it as the file c:\temp\test.png.

To run the code below, an SSL configuration named SSLTEST must be created first.
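
One way to create such a definition from a terminal session (a sketch using default settings and no client certificate; adjust as needed) is to switch to the %SYS namespace and call:

%SYS>do ##class(Security.SSLConfigs).Create("SSLTEST")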

 

ClassMethod download() As %Status
{
    Set sc = $$$OK
    // prepare an HTTPS request to www.intersystems.com
    Set httprequest = ##class(%Net.HttpRequest).%New()
    Set httprequest.Port = 443
    Set httprequest.Https = 1
    Set httprequest.SSLConfiguration = "SSLTEST"
    Set httprequest.Server = "www.intersystems.com"
    // fetch the image
    Do httprequest.Get("/assets/intersystems-logo.png")
    Set httpresponse = httprequest.HttpResponse
    // write the response body to a local file in binary mode
    Set file = ##class(%File).%New("c:\temp\test.png")
    Do file.Open("NWUK\BIN\")
    Do file.CopyFrom(httpresponse.Data)
    Do file.Close()
    Return sc
}
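
A usage sketch from the terminal, assuming the method above is compiled into a hypothetical class named FAQ.Download:

USER>do ##class(FAQ.Download).download()

USER>write ##class(%File).Exists("c:\temp\test.png")
1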
Article
· March 19, 2022 · about 2 min read

Compare Global Write in ePy vs. ISOS/COS

This example demonstrates the difference you may experience when you write to
Globals directly from Embedded Python compared to native ObjectScript.

To make this demo useful, I start 2 background jobs that simply write sequentially
to a dedicated global each. A common control method signals a synchronous start.
Similarly, a common stop & view command interrupts the data feeding.

This is the basic process:

   

With 2 jobs running in parallel, the probability of using only sequential blocks is reduced.
During the development of this demo, I found that the JOB command has problems
with ClassMethods written in Embedded Python.
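
One possible workaround (my own sketch, not necessarily what this package does; class and method names are invented) is to JOB a plain ObjectScript wrapper that in turn calls the Embedded Python ClassMethod:

Class dc.demo.Jobs Extends %RegisteredObject
{
/// Embedded Python worker (JOBbing this directly is what causes trouble)
ClassMethod PyWorker() [ Language = python ]
{
    import iris
    g = iris.gref("^ePyDemo")
    g.set([1], "written from Embedded Python")
}

/// plain ObjectScript wrapper; start this one with JOB instead
ClassMethod StartPyWorker()
{
    do ..PyWorker()
}
}

USER>job ##class(dc.demo.Jobs).StartPyWorker()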

WARNING:
Due to the limited dimensions of the Community license, starting the background jobs
may fail if you have other connections active at the same time.

During my series of tests, I found that Embedded Python writes only about 53% of the nodes
that the pure ISOS code writes in the same time. See also the examples below.

The control method prepares the background processes and has 3 commands:

  • 0 .. stop background jobs and show last run results
  • 1 .. interrupt background jobs and show last run results
  • 2 .. run data load in background jobs.
    • Loading is interrupted after 60 sec to protect the DB in case it is not stopped manually
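
For orientation, a hedged sketch of what the ObjectScript-side writer job could look like (the control global ^ctl, the target global ^ISOS, and the run counter are my assumptions, not the package's actual code):

ClassMethod RunISOS(run As %Integer)
{
    set i = 0
    ; keep writing sequential nodes while the control flag says "go" (2)
    while $get(^ctl(run), 0) = 2 {
        set i = i + 1
        set ^ISOS(run, i) = $zdatetime($ztimestamp, 3, 1, 6)
    }
}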

Example

USER>do ##class(dc.rcc.ePYvsISOS).do()

JobStart ISOS #3841
JobStart  ePy #3842
Job Control [0=stop,1=view,2=go]: 2
Job Control [0=stop,1=view,2=go]: 1
^ePy(1,492029)=2022-03-11 20:49:32.851286
^ISOS(1,927383)=2022-03-11 20:49:32.851289

Job Control [0=stop,1=view,2=go]:
Job Control [0=stop,1=view,2=go]: 2
Job Control [0=stop,1=view,2=go]:
Job Control [0=stop,1=view,2=go]:
Job Control [0=stop,1=view,2=go]:
Job Control [0=stop,1=view,2=go]:
^ePy(2,4149270)=2022-03-11 20:50:54.620362
^ISOS(2,7719690)=2022-03-11 20:50:54.620371

Job Control [0=stop,1=view,2=go]:2
Job Control [0=stop,1=view,2=go]:
Job Control [0=stop,1=view,2=go]: 0
^ePy(3,3148765)=2022-03-11 20:50:54.620362
^ISOS(3,6714519)=2022-03-11 20:50:54.620371


Video

GitHub  

InterSystems Official
· March 15, 2022

InterSystems Kubernetes Operator 3.3 is published

The InterSystems Kubernetes Operator (IKO) version 3.3 is now available via the WRC download page and the InterSystems Container Registry.

IKO simplifies working with InterSystems IRIS or InterSystems IRIS for Health in Kubernetes by providing an easy-to-use irisCluster resource definition. See the documentation for a full list of features, including easy sharding, mirroring, and configuration of ECP.

IKO 3.3 Highlights:

  • Support for 2021.2 and 2022.1 editions of InterSystems IRIS & IRIS for Health
  • Support for Kubernetes 1.21
  • Deploy common System Alerting and Monitoring (SAM) configurations as part of your irisCluster
  • InterSystems API Manager (IAM) can now also be deployed and managed as part of the irisCluster
  • Automatic tagging of mirror pair active side, so a service can always point to the active mirror member.