Article
· November 21 · about 1 min read

InterSystems IRIS - Shells


Hello,

When we open a terminal in IRIS, we enter the ObjectScript environment, where we can run IRIS commands.

In other words, ObjectScript commands are executed in the current shell. But it is always good to remember that IRIS has other shells:

  • SQL
  • Python
  • TSQL
  • MDX

An interesting detail is the shortcuts. We can access these shells through their full calls or through shortcuts, as shown in the table below:

Shell    Call                                    Shortcut
SQL      Do $SYSTEM.SQL.Shell()                  :sql
Python   Do $SYSTEM.Python.Shell()               :py
TSQL     Do $SYSTEM.SQL.TSQLShell()              :tsql
MDX      Do ##class(%DeepSee.Utils).%Shell()     :mdx
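For example, switching into the SQL shell and back might look roughly like this (a sketch only; the banner and prompt text vary by IRIS version, and `USER` is just an assumed namespace):

```
USER>:sql

SQL Command Line Shell
----------------------------------------------------

Enter <command>, 'q' to quit, '?' for help.
[SQL]USER>>SELECT 1 AS one
[SQL]USER>>quit

USER>
```

The other shortcuts (:py, :tsql, :mdx) behave the same way: they drop you into the corresponding shell, and quitting returns you to the ObjectScript prompt.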

 

As simple as that. (The original article shows screenshots of the Python, SQL, TSQL, and MDX shells here.)

 

These shells are extremely useful for running commands and tests. Having these shortcuts at hand saves a lot of time, and they are very handy during development.

The documentation at https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=GCLI_shells covers these shells and their characteristics in detail. They have several interesting features that are worth exploring.

Until next time!

Article
· November 21 · about 2 min read

CCR: Block markMoveToXXXXComplete, markCANCELComplete with Undeployed ItemSets

As part of improvements regarding CCR usage and usability, certain transitions are now blocked when a CCR Record has undeployed ItemSets for required Environments. 

 

To promote best practice, when a Tier 1 or Tier 2 CCR moves between Environments, it is important that ItemSets are deployed to required Environments before confirming that the CCR has successfully been implemented in the next Environment. Previously, when progressing a CCR from one Environment to the next, users were not required to deploy ItemSets before performing markMoveToXXXXComplete or markCANCELComplete transitions. Now, both of these transitions are blocked if there are Undeployed ItemSets for the next Environment(s). 

There are a few important things to note with regards to this change:

  • For Tier 1 CCRs only: if a user wishes to cancel a CCR and has Undeployed ItemSets before cancelling, CCR behaves as follows:
    • Any Undeployed ItemSets that exist before cancel is chosen are abandoned automatically (this is not new behavior, but it is important to note). Abandoned ItemSets do not prevent a user from performing the markCANCELComplete transition.
    • New ItemSets are created to back out changes in each affected Environment.
    • These newly created ItemSets must be deployed before performing the markCANCELComplete transition.
  • For Tier 2 CCRs: there is no automatic backout, so undeployed ItemSets remain undeployed until the user cleans them up. To encourage best practice, the markCANCELComplete transition is blocked until all undeployed ItemSets are abandoned or deployed before the user changes Phase or moves the CCR to Cancelled.

These restrictions are enforced on all Secondary Environments that have "Requires ItemSets" set to true. If Environments are kept up to date another way (e.g. DB refresh, AutoDownload Task), make sure the "Requires ItemSets" box is unchecked so that the new workflow check does not impede work.

Please do not hesitate to comment here with questions or reach out in your normal CCR support channels.

Announcement
· November 21

[Video] Optimizing Parallel Aggregation Using Shared Globals

Hey Community!

We're happy to share a new video from our InterSystems Developers YouTube:

⏯  Optimizing Parallel Aggregation Using Shared Globals @ Ready 2025

This presentation explains an optimization to parallel aggregation in SQL queries using shared globals. Previously, worker processes computed intermediate results separately and sent them to a parent process for serial aggregation, creating delays. The new approach lets all workers write directly and in parallel to a shared global, eliminating the parent bottleneck.

This greatly reduces wait time, especially for queries with many groups. Testing on 20 million rows showed up to a 38% performance improvement, while also simplifying query execution and reducing code complexity.
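The core idea can be sketched in ObjectScript (purely an illustration under assumed names — ^AggTemp, ^Rows, and AggregateChunk are hypothetical, not the actual SQL engine internals): each worker writes its partial results straight into a shared global using atomic $INCREMENT, so no parent process has to merge them serially:

```
/// Hypothetical worker: aggregate one chunk of rows directly into the
/// shared global ^AggTemp. $INCREMENT is atomic, so many workers can
/// update the same group node in parallel without locking.
ClassMethod AggregateChunk(start As %Integer, end As %Integer) As %Status
{
    For i=start:1:end {
        Set group = $Get(^Rows(i, "group"))
        Set value = +$Get(^Rows(i, "value"))
        Set total = $Increment(^AggTemp(group), value)
    }
    Quit $$$OK
}
```

Workers could be fanned out with the Work Queue Manager ($SYSTEM.WorkMgr), each calling AggregateChunk on its own row range; when they finish, ^AggTemp already holds the final per-group sums.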

🗣 Presenter: Elie Eshoa, Systems Developer, InterSystems

Enjoy watching, and subscribe for more videos! 👍

Article
· November 21 · about 1 min read

Production Terminal Commands

Terminal Commands for Production:

  • Start, Stop, Update, Recover, and Clean a Production:

Do ##class(Ens.Director).StartProduction("ProductionName")

Do ##class(Ens.Director).StopProduction()

Do ##class(Ens.Director).UpdateProduction()

Do ##class(Ens.Director).RecoverProduction()

Do ##class(Ens.Director).CleanProduction()

Abort messages in a component's queue:

Do ##class(Ens.Queue).AbortQueue("ComponentName")

Get the instance name:

Write !,##class(%SYS.System).GetUniqueInstanceName()

Get the node name:

Write !,##class(%SYS.System).GetNodeName()

Terminate a process by job ID:

Do $SYSTEM.Process.Terminate(jobid)

Enable a namespace for interoperability:

Do ##class(%EnsembleMgr).EnableNamespace($NAMESPACE)

Enable a config item:

Do ##class(Ens.Director).EnableConfigItem("ConfigNameHere", 0, 1)

Generate a GUID:

Write $SYSTEM.Util.CreateGUID()

Get CPU info:

Do $SYSTEM.CPU.Dump()

Get the number of CPUs (returns the number of virtual CPUs, also known as logical CPUs or threads, on the system):

Write $SYSTEM.Util.NumberOfCPUs()

Get free space (displays free space for all databases):

Do ALL^%FREECNT
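Most Ens.Director methods return a %Status, so in scripts it is worth checking the result rather than calling them blindly. A minimal sketch (the production name "MyProduction" is just a placeholder):

```
; "MyProduction" is a hypothetical production name; use your own.
Set sc = ##class(Ens.Director).StartProduction("MyProduction")
If $SYSTEM.Status.IsError(sc) {
    ; show a readable error message instead of failing silently
    Do $SYSTEM.Status.DisplayError(sc)
}
```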

Thanks,

Article
· November 21 · about 3 min read

My experience with APIs and POS integration.

Hello, friend! 😊 How are you today?

I would like to share a small part of my learnings from my first ever official project: POS/EDC machine integration with our billing system. This was an exciting challenge where I got hands-on experience working with APIs and vendors. 

How does a Payment Machine actually work?

It's simple: start by initiating/creating a transaction, then retrieve its payment status.

Here, initiate/create corresponds to the POST method, and retrieve corresponds to GET.

Workflow... 

Let us assume that the vendor has given us a document with both these APIs (Create and Fetch Payment Status). Samples listed below -
 

CREATE TRANSACTION:

url/endpoint: https://payvendor.com/create-transaction
method: POST
payload: 
{
    "reference_id": "2345678",
    "pos_id": "PISC98765",
    "date_time": "MMDDYYYYHHMMSS",
    "amount": 100
}
response: [200]
{
    "reference_id": "2345678",
    "pos_id": "PISC98765",
    "date_time": "MMDDYYYYHHMMSS",
    "unn": "456789876546787656"
}

FETCH PAYMENT STATUS:

url/endpoint: https://payvendor.com/get-status
method: GET
payload: ?reference_id=2345678
response: [200]
{
    "reference_id": "2345678",
    "pos_id": "PISC98765",
    "date_time": "MMDDYYYYHHMMSS",
    "unn": "456789876546787656",
    "status": "paid",
    "amount": 100
}

 

How do we use these APIs? Let's find out... 🫡

To consume these APIs in Caché ObjectScript, we can use the %Net.HttpRequest class to make HTTP requests from within our code.

Basic:

  • Create an instance of %Net.HttpRequest.
  • Set the url and the HTTP method.
  • Add the header and the body. [if needed]
  • Send the request to the server.
  • Handle the response.
; --------- POST REQUEST EXAMPLE ---------
Set req = ##class(%Net.HttpRequest).%New()  ; create an instance of this class
Set req.Server = "payvendor.com"            ; the server (host name only, no scheme)
Set req.Location = "/create-transaction"    ; the endpoint
Set req.Https = 1                           ; 0 for http / 1 for https
Set req.ContentType = "application/json"    ; content type
; ---- create the JSON body ----
Set obj = ##class(%DynamicObject).%New()
Set obj."reference_id" = "2345678"          ; unique
Set obj."pos_id" = "PISC98765"              ; device number
Set obj."date_time" = $ZSTRIP($ZDATETIME($HOROLOG,8), "*P")
Set obj."amount" = 100
; ---- send request ----
Do req.EntityBody.Write(obj.%ToJSON())
Do req.Post()                               ; .Post() triggers the call
; ---- response ----
Write req.HttpResponse.StatusCode,!         ; HTTP status code
Write req.HttpResponse.Data.Read(),!        ; response body

After creating the transaction, we can maintain a table (preferred) or a global to maintain logs against each transaction. 
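Such a log could look like this (a minimal sketch; ^POSLog and its subscripts are hypothetical names for illustration, not part of any product API):

```
; Hypothetical transaction log keyed by reference_id
Set refId = "2345678"
Set ^POSLog(refId, "created") = $ZDATETIME($HOROLOG, 3)   ; creation timestamp
Set ^POSLog(refId, "unn")     = "456789876546787656"      ; vendor transaction number
Set ^POSLog(refId, "status")  = "initiated"               ; updated later by the GET flow
```

Keeping one node per transaction makes it easy to reconcile against the vendor's records later.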

; --------- GET REQUEST EXAMPLE ---------
Set req = ##class(%Net.HttpRequest).%New()  ; create an instance of this class
Set req.Server = "payvendor.com"            ; the server (host name only, no scheme)
Set req.Location = "/get-status"            ; the endpoint
Set req.Https = 1                           ; 0 for http / 1 for https
; ---- query parameters ----
Do req.SetParam("reference_id", "2345678")
; ---- send request ----
Do req.Get()                                ; .Get() triggers the call
; ---- response ----
Set stsCode = req.HttpResponse.StatusCode   ; HTTP status code
If stsCode=200 {
    Set objResponse = req.HttpResponse.Data.Read()
    Set objData = ##class(%DynamicObject).%FromJSON(objResponse)
    Set payStatus = objData.status          ; payment status
}

This is how we fetch the payment status. After we fetch the status, we can update the same in the billing system and our logs too.

 

This workflow is simple, but as we write more code, we can evolve better frameworks and approaches. So far I have successfully integrated 5 POS vendors and 3 payment gateways with our billing system. If you have any questions or need guidance, feel free to reach out!

Also open for feedback. :)

 

Thanks...
