
Question
· March 8, 2024

Classic View Page for CCR to be deprecated - did we miss anything on the new UI?

Last year we introduced our new Angular-based View page for CCR as part of the UI refresh for the application. It has been used very effectively by close to 1000 users around the world as the default UI for viewing CCR, and as a result we're getting ready to completely disable the "classic" View page.

Benefits of the new page include:

  • modern look and feel
  • reworked UX   
  • dynamic data updates
  • new tabbed access to reduce scrolling
  • dynamic workflow visualization (coming soon)

Before we turn off access to the old UI, we really want to make sure that users are able to do everything they need to with the new UI.  Have you found that you need to revert to using the Classic View page to accomplish certain tasks?  If so, please let us know in the comments below.

6 Comments
Article
· March 8, 2024 · 3 min read

IKO - Lessons Learned (Part 4 - The Storage Class)

The IKO dynamically provisions storage in the form of persistent volumes, and pods claim them via persistent volume claims.

But storage can come in different shapes and sizes. The blueprint for the details of those persistent volumes comes in the form of the storage class.

This raises the question: we've deployed the IrisCluster, and haven't specified a storage class yet. So what's going on?

You'll notice that with a simple

kubectl get storageclass

you'll find the storage classes that exist in your cluster. Note that storage classes are a cluster-wide resource, not namespaced like other objects such as our pods and services.
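The output looks something like this (the class names and provisioners below are just an example from a GKE cluster; yours will differ):

NAME                     PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
premium-rwo              pd.csi.storage.gke.io   Delete          WaitForFirstConsumer   true                   30d
standard-rwo (default)   pd.csi.storage.gke.io   Delete          WaitForFirstConsumer   true                   30d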

You'll also notice that one of the storage classes is marked as default. This is the one that the IKO takes when we do not specify any. What if none are marked as default? In this case we have the following problem:

Persistent volumes are not created, which in turn means the persistent volume claims are never bound, and therefore the pod is stuck in a Pending state. It's like going to a restaurant, looking at the menu, telling the waiter or waitress that you'd like to order food, then closing the menu, handing it back, and saying thanks. We need to be more specific, or our instructions are so vague that they mean nothing.
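You can see this state for yourself with standard kubectl commands (a rough sketch; add -n <namespace> if you deployed into a specific namespace, and use the claim names that kubectl get pvc actually reports):

kubectl get pods                   # the data pod sits in Pending
kubectl get pvc                    # the persistent volume claims also show STATUS Pending
kubectl describe pvc <claim-name>  # the Events section explains why the claim is not bound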

To solve this problem you could either set a default storage class in your cluster, or set the storageClassName field in the CRD (that way you don't need to change your cluster's default storage class in case you choose to use a non-default one):

apiVersion: intersystems.com/v1alpha1
kind: IrisCluster
metadata:
  name: simple
spec:
  licenseKeySecret:
    #; to activate ISC license key
    name: iris-key-secret
  configSource:
    #; contains CSP-merge.ini, which is merged into IKO's
    #; auto-generated configuration.
    name: iris-cpf
  imagePullSecrets:
    - name: intersystems-pull-secret
  storageClassName: your-sc

  topology:
    data:
      image: containers.intersystems.com/intersystems/irishealth:2023.3
      compatibilityVersion: "2023.3"
      mirrored: true
      webgateway:
        image: containers.intersystems.com/intersystems/webgateway:2023.3
        type: apache
        replicas: 1
        applicationPaths:
          - /csp/sys
          - /csp/healthshare
          - /api/atelier
          - /csp/broker
          - /isc
          - /oauth2
          - /ui
        loginSecret:
           name: iris-webgateway-secret
           
    arbiter:
      image: containers.intersystems.com/intersystems/arbiter:2023.3
    webgateway:
      replicas: 1
      image: containers.intersystems.com/intersystems/webgateway:2023.3
      applicationPaths:
        #; All of the IRIS instance's system default applications.
        #; For Management Portal only, just use '/csp/sys'.
        #; To support other applications, please add them to this list.
        - /csp/sys
        - /csp/broker
        - /api
        - /isc
        - /oauth2
        - /ui
        - /csp/healthshare
      alternativeServers: LoadBalancing
      loginSecret:
        name: iris-webgateway-secret

  serviceTemplate:
    # ; to enable external IP addresses
    spec:
      type: LoadBalancer

Note that there are specific requirements for the storage class, as called out in the documentation:

"Any storage class you define must include the Kubernetes setting volumeBindingMode: WaitForFirstConsumer for correct operation of the IKO."

Furthermore, I like to use allowVolumeExpansion: true.

Note that the provisioner of your storage class is platform specific.
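For example, on GKE a storage class that meets those requirements could look something like this (the name, provisioner, and parameters here are illustrative assumptions; substitute the CSI provisioner for your platform, and include the default-class annotation only if you want this to be the cluster default):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: iris-ssd
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"  # optional: mark as the cluster default
provisioner: pd.csi.storage.gke.io                       # platform specific, e.g. ebs.csi.aws.com on EKS
parameters:
  type: pd-ssd
volumeBindingMode: WaitForFirstConsumer                  # required for correct operation of the IKO
allowVolumeExpansion: true                               # lets you grow the volumes later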

The storage class pops up all over the CRD, so remember to set it when you are customizing storage for your cluster, to make sure you end up with the storage class that's right for you.

1 Comment
Article
· March 6, 2024 · 9 min read

Connecting to DynamoDB Using Embedded Python: A Tutorial for Using Boto3 and ObjectScript to Write to DynamoDB

Introduction

As the health interoperability landscape expands to include data exchange across on-premises as well as hosted solutions, we are seeing an increased need to integrate with services such as cloud storage. One of the most widely used and well-supported tools is the NoSQL database DynamoDB (Dynamo), provided by Amazon Web Services (AWS).

4 Comments
Article
· March 6, 2024 · 3 min read

IKO - Lessons Learned (Part 3 - Services 101 and The Sidecars)

The IKO allows for sidecars. The idea behind them is to have direct access to a specific instance of IRIS. If we have mirrored data nodes, the web gateway will (correctly) only give us access to the primary node. But perhaps we need access to a specific instance. The sidecar is the solution.

Building on the example from the previous article, we introduce the sidecar using mirrored data nodes and, of course, an arbiter.

apiVersion: intersystems.com/v1alpha1
kind: IrisCluster
metadata:
  name: simple
spec:
  licenseKeySecret:
    #; to activate ISC license key
    name: iris-key-secret
  configSource:
    #; contains CSP-merge.ini, which is merged into IKO's
    #; auto-generated configuration.
    name: iris-cpf
  imagePullSecrets:
    - name: intersystems-pull-secret

  topology:
    data:
      image: containers.intersystems.com/intersystems/irishealth:2023.3
      compatibilityVersion: "2023.3"
      mirrored: true
      webgateway:
        image: containers.intersystems.com/intersystems/webgateway:2023.3
        type: apache
        replicas: 1
        applicationPaths:
          - /csp/sys
          - /csp/healthshare
          - /api/atelier
          - /csp/broker
          - /isc
          - /oauth2
          - /ui
        loginSecret:
           name: iris-webgateway-secret
           
    arbiter:
      image: containers.intersystems.com/intersystems/arbiter:2023.3
    webgateway:
      replicas: 1
      image: containers.intersystems.com/intersystems/webgateway:2023.3
      applicationPaths:
        #; All of the IRIS instance's system default applications.
        #; For Management Portal only, just use '/csp/sys'.
        #; To support other applications, please add them to this list.
        - /csp/sys
        - /csp/broker
        - /api
        - /isc
        - /oauth2
        - /ui
        - /csp/healthshare
      alternativeServers: LoadBalancing
      loginSecret:
        name: iris-webgateway-secret

  serviceTemplate:
    # ; to enable external IP addresses
    spec:
      type: LoadBalancer

 

Notice how the sidecar is nearly identical to the 'maincar' webgateway; it is just placed within the data node. That's because it is a second container that sits in the pod alongside the IRIS container. This all sounds great, but how do we actually access it? The IKO nicely creates services for us, but for the sidecar that responsibility falls on us.

So how do we expose this webgateway? With a service like this:

apiVersion: v1
kind: Service
metadata:
  name: sidecar-service
spec:
  ports:
  - name: http
    port: 81
    protocol: TCP
    targetPort: 80
  selector:
    intersystems.com/component: data
    intersystems.com/kind: IrisCluster
    intersystems.com/mirrorRole: backup
    intersystems.com/name: simple
    intersystems.com/role: iris
  type: LoadBalancer

Now our 'maincar' service always points at the primary and the sidecar service at the backup. But we could just as well have created one sidecar service to expose data-0-0 and another to expose data-0-1, regardless of which is the primary or backup. Services make it possible to expose any pod we want, targeting it through the selector, which simply identifies a pod (or multiple pods) by its labels.
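For example, here is a sketch of a service pinned to one specific pod by name (assuming the first data pod is called simple-data-0-0, following the cluster name above; check the real name with kubectl get pods and adjust the label accordingly):

apiVersion: v1
kind: Service
metadata:
  name: data-0-0-service
spec:
  ports:
  - name: http
    port: 81
    protocol: TCP
    targetPort: 80
  selector:
    # StatefulSet pods automatically carry this label, so the service targets exactly one pod
    statefulset.kubernetes.io/pod-name: simple-data-0-0
  type: LoadBalancer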

We've barely scratched the surface on services and haven't even mentioned their more sophisticated partner, ingress. You can read up more about that here in the meantime.

In the next bite sized article we'll cover the storage class.

1 Comment
Article
· March 4, 2024 · 8 min read

InterSystems IRIS® CloudSQL Metrics to Google Cloud Monitoring

If you are a customer of the new InterSystems IRIS® Cloud SQL and InterSystems IRIS® Cloud IntegratedML® cloud offerings and want access to the metrics of your deployments so you can send them to your own observability platform, here is a quick and dirty way to get it done by sending the metrics to Google Cloud Monitoring (formerly Stackdriver).

The Cloud portal does show some top-level metrics for an at-a-glance view, powered by a metrics endpoint that is exposed to you, but without some inspection you would not know it was there.

🚩 This approach is most likely taking advantage of a "to be named feature", so with that being said, it is not future-proof and definitely not supported by InterSystems.


So what if you wanted a more comprehensive set exported? This technical article/example shows a technique to scrape and forward metrics to observability; it can be modified to suit your needs to scrape ANY metrics target and send to ANY observability platform using the OpenTelemetry Collector.

The mechanics leading up to the above result can be accomplished in many ways, but here we are standing up a Kubernetes pod that runs a Python script in one container and the OpenTelemetry Collector in another to pull and push the metrics... definitely a choose-your-own-adventure, but for this example and article Kubernetes is the actor pulling this off, with Python.

Steps:

  • Prereqs
  • Python
  • Container
  • Kubernetes
  • Google Cloud Monitoring

Prerequisites:

  • An active subscription to IRIS®  Cloud SQL
  • One Deployment, running, optionally with Integrated ML
  • Secrets to supply to your environment 

Environment Variables

Obtain Secrets

Python:

Here is the Python hackery to pull the metrics from the Cloud Portal and export them locally as metrics for the otel collector to scrape:

 
iris_cloudsql_exporter.py

Docker:

 
Dockerfile


Deployment:

k8s; Create us a namespace:

kubectl create ns iris

k8s; Add the secret:

kubectl create secret generic iris-cloudsql -n iris \
    --from-literal=user=$IRIS_CLOUDSQL_USER \
    --from-literal=pass=$IRIS_CLOUDSQL_PASS \
    --from-literal=clientid=$IRIS_CLOUDSQL_CLIENTID \
    --from-literal=api=$IRIS_CLOUDSQL_API \
    --from-literal=deploymentid=$IRIS_CLOUDSQL_DEPLOYMENTID \
    --from-literal=userpoolid=$IRIS_CLOUDSQL_USERPOOLID

otel; create the config:

apiVersion: v1
data:
  config.yaml: |
    receivers:
      prometheus:
        config:
          scrape_configs:
          - job_name: 'IRIS CloudSQL'
            # Override the global default and scrape targets from this job every 30 seconds.
            scrape_interval: 30s
            scrape_timeout: 30s
            static_configs:
            - targets: ['192.168.1.96:5000']
            metrics_path: /

    exporters:
      googlemanagedprometheus:
        project: "pidtoo-fhir"
    service:
      pipelines:
        metrics:
          receivers: [prometheus]
          exporters: [googlemanagedprometheus]
kind: ConfigMap
metadata:
  name: otel-config
  namespace: iris

k8s; Load the otel config as a configmap:

kubectl -n iris create configmap otel-config --from-file config.yaml

k8s; deploy a load balancer (definitely optional), MetalLB. I do this to scrape and inspect from outside of the cluster.

cat <<EOF | kubectl apply -n iris -f -
apiVersion: v1
kind: Service
metadata:
  name: iris-cloudsql-exporter-service
spec:
  selector:
    app: iris-cloudsql-exporter
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 5000
    targetPort: 8000
EOF

gcp; you need keys to Google Cloud, and the service account needs to be scoped with:

  • roles/monitoring.metricWriter

kubectl -n iris create secret generic gmp-test-sa --from-file=key.json=key.json

k8s; the deployment/pod itself, two containers:

 
deployment.yaml
kubectl -n iris apply -f deployment.yaml
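The actual deployment.yaml is collapsed above; purely to illustrate the two-container shape described earlier, a minimal sketch could look like the following (the exporter image name, mount paths, ports, and environment wiring are assumptions, not the author's file):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: iris-cloudsql-exporter
  namespace: iris
spec:
  replicas: 1
  selector:
    matchLabels:
      app: iris-cloudsql-exporter
  template:
    metadata:
      labels:
        app: iris-cloudsql-exporter
    spec:
      containers:
      - name: exporter                                      # runs iris_cloudsql_exporter.py
        image: your-registry/iris-cloudsql-exporter:latest  # hypothetical image built from the Dockerfile above
        ports:
        - containerPort: 8000                               # matches the service targetPort
        envFrom:
        - secretRef:
            name: iris-cloudsql                             # exposes the secret keys as env vars; adjust to what the script expects
      - name: otel-collector
        image: otel/opentelemetry-collector-contrib:latest  # contrib build includes the googlemanagedprometheus exporter
        args: ["--config=/etc/otel/config.yaml"]
        env:
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /var/secrets/google/key.json
        volumeMounts:
        - name: otel-config
          mountPath: /etc/otel
        - name: gcp-key
          mountPath: /var/secrets/google
      volumes:
      - name: otel-config
        configMap:
          name: otel-config                                 # the otel config created earlier
      - name: gcp-key
        secret:
          secretName: gmp-test-sa                           # the GCP service account key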

Running

Assuming nothing is amiss, let's peruse the namespace and see how we are doing.

✔ 2 config maps, one for GCP, one for otel

✔ 1 load balancer

✔ 1 pod, 2 containers, successful scrapes

Google Cloud Monitoring

Inspect observability to see whether the metrics are arriving OK, and be awesome in observability!

1 Comment