
Article
· September 11, 2024 · ~3 min read

Code Scanner

This story has followed me for more than 20 years.

In the early days of ObjectScript, the set of available $Functions was limited.
You often had to write the functionality yourself as part of your program.
In many cases the functionality was already implemented internally; it just had no official name.
To use it, you made "system" calls with $ZU() functions. See the Reference.

Over time most $ZU() calls were deprecated and replaced by "official" $Functions.
But how do you find them across thousands of lines of code, written over decades
by an uncounted number of programmers long since gone with the wind?
Studio was still fresh at the time and not very performant.

So I created a search utility to scan all corners of possible ObjectScript code.
It is written in pure ObjectScript, avoiding all system methods.
This makes it independent, and it was very useful in preparing a migration to IRIS.

It looks like this:

USER>do ^rcc.find
----------------
 
enter search string [$ZU] <blank> to exit: RCC
          Verbose? (0,1) [0]:
          Force UpperCase? (1,0) [1]:
 
enter code type (CLS,MAC,INT,INC,ALL) [ALL]: CLS
 
select namespace (ALL,%SYS,DOCBOOK,ENSDEMO,ENSEMBLE,SAMPLES,USER) [USER]:
 
** Scan Namespace: USER **
 
** CLS **
** 2      User.ConLoad
** 15     User.Main
** 3      csp.form
** 3      csp.winner
** 2      dc.rcc.Contest
** 37     dc.rcc.Main
** 1      dc.rcc.Prize
** 63 CLS **
----------------
 
enter search string [$ZU] <blank> to exit: RCC
          Verbose? (0,1) [0]:
          Force UpperCase? (1,0) [1]:
 
enter code type (CLS,MAC,INT,INC,ALL) [ALL]: mac
 
select namespace (%SYS,DOCBOOK,ENSDEMO,ENSEMBLE,SAMPLES,USER) [USER]:
 
** Scan Namespace: USER **
 
** MAC **
** 9      zpmshow
** 9 MAC **
----------------
 
enter search string [$ZU] <blank> to exit: RCC
          Verbose? (0,1) [0]: 1
          Force UpperCase? (1,0) [1]:
 
enter code type (CLS,MAC,INT,INC,ALL) [ALL]: MAC
 
select namespace (%SYS,DOCBOOK,ENSDEMO,ENSEMBLE,SAMPLES,USER) [USER]:
 
** Scan Namespace: USER **
 
** MAC **
 
** zpmshow **
#define rcc ^||rcc
        kill $$$rcc
                ,$$$rcc($i($$$rcc))=lin
        else { zw  zw $$$rcc b }
        for i=1:1:$$$rcc {
                if $d($$$rcc(+nb)) {
                set cmd=$s(ac="u":"un",1:"")_"install "_$li($$$rcc(nb))
        set lin=$g($$$rcc(ln))
                    ,$$$rcc(ln)=lin_$lb(desc)
** 9      zpmshow
** 9 MAC **
----------------
  • First, enter the string to be searched for
  • Verbose lets you see the matches in detail
  • Force UpperCase makes the scan case-insensitive
  • Code type lets you split the scan into several steps
  • Namespace defines where the scan is executed
    • %-routines and %-classes are always excluded for namespaces other than %SYS
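To illustrate the idea behind the options above, here is a hypothetical sketch in Python (not the actual ^rcc.find source, which is pure ObjectScript): count occurrences of a search string per code unit, optionally forcing upper case so the scan becomes case-insensitive.

```python
# Hypothetical sketch of the scan logic; names like `scan` and the
# sample sources are illustrative, not part of the real utility.
def scan(sources, needle, force_upper=True):
    """sources: {unit name: list of code lines}.
    Returns {unit name: match count}, omitting units with no matches."""
    if force_upper:
        needle = needle.upper()
    hits = {}
    for name, lines in sources.items():
        n = sum(
            (line.upper() if force_upper else line).count(needle)
            for line in lines
        )
        if n:
            hits[name] = n
    return hits

sources = {
    "dc.rcc.Main": ["set x=$$$RCC(1)", "do ##class(dc.rcc.Main).Run()"],
    "User.ConLoad": ['write !,"no match here"'],
}
print(scan(sources, "rcc"))  # {'dc.rcc.Main': 2}
```

With Force UpperCase off, only literal matches count, which is why the real utility defaults it to on.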

Practical hint

  • Run a scan over ALL, non-verbose, to find the affected code types
  • Next, run over INC and apply the required changes
  • Then run over CLS and apply the required changes
  • Then run over MAC and apply the required changes
  • Most likely there is no need for any fix in INT

GitHub

Video

Demo Server SMP
Demo Server WebTerminal
 

Announcement
· September 11, 2024

New InterSystems Reports Designer videos available

We are happy to share that the Learning Services team has recently added new content to our InterSystems Reports learning path. These latest videos, created by our partner insightsoftware, provide instructions for developing reports with InterSystems Report Designer.

In these three short videos, you will learn how to:

  • Get started with Report Designer: an orientation that walks you through creating a banded report.
  • Add formulas to reports: learn how to incorporate formulas into your existing reports.
  • Preview and export reports: explore the differences between page and web reports, along with various export formats.

Head over to the InterSystems Reports learning path to watch them and improve your report-building skills today!

Article
· September 11, 2024 · ~4 min read

Using the right stream types in a mirrored environment

Hello,

I am sharing this article in the form of an ADR (Architecture Decision Record) that we wrote within our teams.
The goal was to make a sound choice of stream types in a mirrored context.
Note that some of these points can be useful even in an environment without mirroring.

Article
· September 10, 2024 · ~3 min read

Chapter 21: Encrypting the SOAP Body - Variation: Using Information That Identifies the Certificate

<BinarySecurityToken> contains the certificate in serialized, base-64-encoded format. You can omit this token and instead use information that identifies the certificate; the recipient uses that information to retrieve the certificate from the appropriate location. To do so, follow the steps above with the following changes:

Article
· September 10, 2024 · ~4 min read

eBPF: Parca - Continuous Profiling for IRIS Workloads

So, whether you are following on from the previous post or just dropping in now, let's segue into the world of eBPF applications and take a look at Parca. It builds on our brief investigation of performance bottlenecks using eBPF, but puts a killer app on top of your cluster to monitor all your IRIS workloads, continuously, cluster-wide!

Continuous Profiling with Parca: IRIS Workloads, Cluster-Wide

Parca

Parca is named after the Program for Arctic Regional Climate Assessment (PARCA) and the ice core profiling done as part of it to study climate change. This open source eBPF project aims to reduce some of the carbon emissions produced by unnecessary resource usage in data centers; we can use it to get "more for less" in resource consumption and to optimize our cloud-native workloads running IRIS.

Parca is a continuous profiling project. Continuous profiling is the act of taking profiles (CPU, memory, I/O, and more) of programs in a systematic way. Parca collects, stores, and makes profiles available to be queried over time, and thanks to the low overhead of eBPF it can do this without degrading the target workloads.
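To make "taking profiles in a systematic way" concrete, here is a minimal Python sketch of the core idea behind a sampling profiler: periodically capture every thread's call stack and aggregate counts per function, so the hottest code dominates the samples. This is purely illustrative; Parca's agent does this at the kernel level with eBPF, not from inside the process.

```python
# Toy in-process sampling profiler (illustrative only; all names here
# are assumptions, not Parca's API). Hot functions accumulate the most
# stack samples, which is exactly what a CPU flame graph visualizes.
import collections
import sys
import threading
import time

def sample_stacks(counts, interval=0.01, duration=0.5):
    """Repeatedly snapshot all threads' current frames, counting leaf functions."""
    end = time.monotonic() + duration
    while time.monotonic() < end:
        for frame in sys._current_frames().values():
            counts[frame.f_code.co_name] += 1
        time.sleep(interval)

def busy():
    # A deliberately CPU-bound loop so the sampler has something to catch.
    start = time.monotonic()
    while time.monotonic() - start < 0.5:
        sum(i * i for i in range(1000))

counts = collections.Counter()
worker = threading.Thread(target=busy)
worker.start()
sample_stacks(counts)
worker.join()
print(counts.most_common(3))
```

A real continuous profiler runs this kind of sampling all the time, at low frequency, and ships the aggregated stacks to a server (Parca) for storage and querying.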

Where

If you thought monitoring a kernel running multiple Linux kernel namespaces was cool in the last post, Parca manages to bring all of that together in one spot, with a single pane of glass across all nodes (kernels) in a cluster.


 

Parca has two main components:

  • Parca: The server that stores profiling data and allows it to be queried and analyzed over time.
  • Parca Agent: An eBPF-based whole-system profiler that runs on the nodes.

To hop right into "Parca applied", I configured Parca on my cluster with the following:
 

 kubectl create namespace parca
 kubectl apply -f https://github.com/parca-dev/parca/releases/download/v0.21.0/kubernetes-manifest.yaml
 kubectl apply -f https://github.com/parca-dev/parca-agent/releases/download/v0.31.1/kubernetes-manifest.yaml

This results in a DaemonSet running the agent on all 10 nodes, with about 3-4 IRIS workloads scattered throughout the cluster.

Note: Parca runs standalone too; no Kubernetes required!

Let's Profile

Now, I have a couple of workloads of interest on this cluster. One is a FHIR workload servicing a GET on the /metadata endpoint across 3 pods on an interval (for friends I am trying to impress at an eBPF party); the other is a plain 2024.2 pod running the following as a JOB:

Class EBPF.ParcaIRISPythonProfiling Extends %RegisteredObject
{

/// Do ##class(EBPF.ParcaIRISPythonProfiling).Run()
ClassMethod Run()
{
    While 1 {
            HANG 10
            Do ..TerribleCode()
            Do ..WorserCode()
            Do ..OkCode()
            zn "%SYS"
            do ##class(%SYS.System).WriteToConsoleLog("Parca Demo Fired")
            zn "PARCA"
    }
}

ClassMethod TerribleCode() [ Language = python ]
{

    import time
    def terrible_code():
        time.sleep(30)
        print("TerribleCode Fired...")
    terrible_code()
}

ClassMethod WorserCode() [ Language = python ]
{
    import time
    def worser_code():
        time.sleep(60)
        print("WorserCode Fired...")
    worser_code()
}

ClassMethod OkCode() [ Language = python ]
{

    import time
    def ok_code():
        time.sleep(1)
        print("OkCode Fired....")
    ok_code()
}

}

Now, I popped a MetalLB service in front of the Parca service and dove right into the console; let's take a peek at what we can observe in the two workloads.

Python Execution

So I didn't get quite what I wanted out of the results here, but I did get some hints about how IRIS does the whole Python integration thing.

In Parca, I constrained the query to the particular pod, summed by the same label, and selected a sane timeframe:

And here was the resulting pprof:

I can see irisdb doing the Python execution, traces involving ISCAgent, and on the right basically the IRIS init work in the container. Full transparency: I was expecting to see the Python methods themselves, so I have to work on that, but I did learn that pythoninit.so is the star of the Python call-out show.


FHIR Thinger

Now this one does show some traces relevant to a FHIR workload from a kernel perspective. On the left you can see the Apache threads for the web server standing up the API, and in the irisdb traces you can also see the unmarshalling of JSON.

All spawned from a thread in what is known as a `zu210fun` party!

Now, let's take a look at the same workload in Grafana, since Parca exports to observability stacks:



Not earth-shattering, I know, but the point is distributed profiling of an IRIS app with eBPF, in lightweight fashion, across an entire cluster... with the sole goal of never having to ask a customer for a pButtons report again!
