Question · February 14, 2023

Is there a way to export everything related to a production?

Hi, is there a simple way to export everything related to a production and import it into another instance?

For example, exporting a production while saving its database, namespace, mappings, related web applications, resources, roles, and so on.

3 Comments
Question · February 14, 2023

Athena IDX IRIS tables or technical guide?

Good Morning All!

We would like to add our custom reporting on top of the IRIS database that runs IDX on the client's server. We need technical documentation that lets us review the tables and their definitions. I tried connecting via ODBC, but it does not show the schema of all tables.

Please suggest where I can get a technical guide with schema information for the next steps.

Thanks
Laxmi Lal Menaria

3 Comments
Article · February 13, 2023 · 4 min read

When to use Columnar Storage

With InterSystems IRIS 2022.2, we introduced Columnar Storage as a new option for persisting your IRIS SQL tables that can boost your analytical queries by an order of magnitude. The capability is marked as experimental in 2022.2 and 2022.3, but will "graduate" to a fully supported production capability in the upcoming 2023.1 release. 

The product documentation and this introductory video already describe the differences between row storage, still the default on IRIS and used throughout our customer base, and columnar table storage, and they provide high-level guidance on choosing the appropriate storage layout for your use case. In this article, we'll elaborate on this subject and share some recommendations based on industry-practice modelling principles, internal testing, and feedback from Early Access Program participants.

Generally, our guidance on choosing an appropriate table layout for your IRIS SQL schema is as follows:

  1. If you’re deploying an application that leverages IRIS SQL or Objects, such as an EHR, ERP or transaction processing application, there is no need to change its current row storage layout to a columnar one. Most SQL queries issued for end user applications or programmatic transactions only retrieve or update a limited number of rows, and result rows usually correspond to table rows, with very limited use of aggregate functions. In such cases, the benefits offered by columnar storage and vectorized query processing don’t apply.  
  2. If such an application also embeds operational analytics, consider adding columnar indices if the corresponding analytical queries’ current performance is not satisfactory. This includes, for example, dashboards showing the current inventory or basic financial reporting on live data. Look for numeric fields used in aggregations (e.g. quantities, currencies) or high-cardinality fields used in range conditions (e.g. timestamps). A good indicator for such opportunities is current use of bitmap indices to speed up the filtering of large numbers of rows, usually on low-cardinality fields (e.g. categorical or ordinal fields). There is no need to replace these bitmap indices; the additional columnar indices work well in conjunction with them and are meant to avoid excessive reads from the master map or regular index maps (single gref per row).  
  3. If your IRIS SQL tables contain fewer than a million rows, there is no need to consider columnar storage. We prefer not to pin ourselves to specific numbers, but the benefits of vectorized query processing are unlikely to make a difference at these low volumes.
  4. If you’re deploying an IRIS SQL schema for Data Warehouse, Business Intelligence, or similar analytical use cases, consider changing it to default to columnar storage. Star schemas, snowflake schemas or other denormalized table structures as well as broad use of bitmap indices and batch ingestion are good indicators for these use cases. Analytical queries that will benefit most from columnar storage are those that scan large numbers of rows and aggregate values across them. When defining a “columnar table”, IRIS will transparently resort to a row layout for columns in that table that aren’t a good fit for columnar storage, such as streams, long strings or serial fields. IRIS SQL fully supports such mixed table layouts and will use vectorized query processing for eligible parts of the query plan. The added value of bitmap indices on columnar tables is limited, so they can be left out.

Mileage will vary based on both environmental and data-related parameters, so we highly recommend that customers test the different layouts in a representative setup. Columnar indices are easy to add to a regular row-organized table and will quickly yield a realistic perspective on query performance benefits. This, along with the flexibility of mixed table layouts, is a key differentiator of InterSystems IRIS that helps customers achieve an order-of-magnitude performance improvement.
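To make guidelines 2 and 4 concrete, here is a minimal sketch in class-definition form; the class, property, and index names are hypothetical, and the syntax follows the documented STORAGEDEFAULT parameter and columnar index type:

/// Guideline 4: an analytical fact table stored columnar by default
Class Demo.SalesFact Extends %Persistent
{

Parameter STORAGEDEFAULT = "columnar";

Property Region As %String;

Property SoldAt As %TimeStamp;

Property Amount As %Numeric(SCALE = 2);

}

/// Guideline 2: a row-organized table that keeps its bitmap index and
/// adds a columnar index for analytical aggregations
Class Demo.Orders Extends %Persistent
{

Property Category As %String;

Property Quantity As %Integer;

/// The existing bitmap index on a low-cardinality field stays in place
Index CategoryIdx On Category [ Type = bitmap ];

/// Columnar index to vectorize aggregations such as SUM(Quantity)
Index QuantityIdx On Quantity [ Type = columnar ];

}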

We intend to make these recommendations more concrete as we gain real-world experience with the full production release. We can also provide more specific advice based on customers' actual schemas and workloads through the Early Access Program and POC engagements, and we look forward to feedback from customers and community members. Columnar Storage is part of the InterSystems IRIS Advanced Server license and is also enabled in the Community Edition of InterSystems IRIS and IRIS for Health. For a fully scripted demo environment, please refer to this GitHub repository.

2 Comments
Article · February 12, 2023 · 3 min read

Enabling IRIS Interoperability Source Control with InterSystems Package Manager and git-source-control

Hi Developers!

As you know, InterSystems IRIS Interoperability solutions contain different kinds of elements, such as productions, business rules, business processes, data transformations, and record maps. Sometimes we create and modify these elements with UI tools, and of course we need a handy and robust way to source-control the changes made that way.

For a long time this meant either a manual procedure (export the class, element, global, etc.) or a cumbersome setup, so the time saved by source-control UI automation competed with the time lost setting up and maintaining the configuration.

Now the problem no longer exists, thanks to two approaches: package-first development and the IPM package git-source-control by @Timothy Leavitt.


The details are below!

Disclaimer: this relates to the client-side approach to development, where the elements of the Interoperability production are files in the repository.

So, this article will not be long at all, as the solution is fantastically simple.

I assume you develop with Docker, and that when you build the dev-environment Docker image with IRIS, you load the solution as an IPM module. This is called "package-first" development, and there is a related video and article. The basic idea is that the dev-environment Docker image with IRIS gets the solution loaded as a package, exactly as it would be deployed on a client's server.

To make a package-first dev environment for your solution, add a module.xml to the repository, describe all the elements of the solution in it, and call the zpm "load" command during the Docker image build phase.

I can demonstrate the idea with the example template: the IRIS Interoperability template and its module.xml. Here is how the package is loaded during docker build:

zpm "load /home/irisowner/irisdev/ -v":1:1

the source. 

Notice the following two lines, placed before the package is loaded. Because of them, source control starts working automatically for ALL the interoperability elements in the package and exports them into the proper folders in the proper format:

zpm "install git-source-control"
do ##class(%Studio.SourceControl.Interface).SourceControlClassSet("SourceControl.Git.Extension")

the source

How is it possible?

Recently, the git-source-control app gained support for IPM packages that are loaded in dev mode: it reads the folder to export to and the structure of the sources from module.xml. @Timothy Leavitt can provide more details.

If we check the list of IPM modules in the terminal after the environment is built, we can see that the loaded module is indeed in dev mode:

USER>zpm
=============================================================================
|| Welcome to the Package Manager Shell (ZPM).                             ||
|| Enter q/quit to exit the shell. Enter ?/help to view available commands ||
=============================================================================
zpm:USER>list
git-source-control      2.1.0
interoperability-sample 0.1.2 (DeveloperMode)
sslclient               1.0.1
zpm:USER>

Let's try it!

I cloned this repository, opened it in VSCode, and built the image. Below, I test the Interoperability UI and source control: I make a change in the UI and it immediately appears in the sources and diffs:

It works! That's it! 

In conclusion, here is what you need in order to have source control for Interoperability UI elements in your project:

1. Add these two lines to iris.script while building the Docker image:

zpm "install git-source-control"
do ##class(%Studio.SourceControl.Interface).SourceControlClassSet("SourceControl.Git.Extension")

Then load your solution as a module, e.g. like this:

zpm "load /home/irisowner/irisdev/ -v":1:1

2. Or start a new project by creating a repository from the Interoperability template.

Thanks for reading! Comments and feedback are welcome!

8 Comments
Article · February 12, 2023 · 6 min read

How to add API-key validation to REST requests

 

Hi! Recently I had to apply API-key validation to a web app with a lot of endpoints, and I'm going to tell you how I did it in a centralized way: applying the validation generically (or selectively) to all the endpoints of the web app.

 

For this feature I took the Base.cls class from the iris-rest-api-template repository as a starting point.

I modified this class a bit to be able to check API-key security. The idea is that you copy this class into your projects and extend it for your own implementations.

The first thing I did was add a new class parameter called ApiKey, initialized as empty:

Parameter ApiKey = "";

 

Next, I modified the OnPreDispatch method. This method is called before the request is processed, which makes it the perfect place to add API-key validation.

In this method I added a call to a new method I wrote, called ValidateApiKey.
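Here is a minimal sketch of what that hook can look like (the template's real OnPreDispatch may do more; only the added call is shown):

ClassMethod OnPreDispatch(pUrl As %String, pMethod As %String, ByRef pContinue As %Boolean) As %Status
{
  // Delegate to the API-key check; it sets pContinue = 0 on failure
  QUIT ..ValidateApiKey(pUrl, pMethod, .pContinue)
}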

 

I implemented the ValidateApiKey method like this: 

ClassMethod ValidateApiKey(pUrl As %String, pMethod As %String, ByRef pContinue As %Boolean) As %Status
{
  SET tSC = $$$OK
  // Skip validation for endpoints declared public by MustCheckApiKey
  QUIT:(..MustCheckApiKey(pUrl, pMethod)=0) tSC

  // Only enforce validation when the class defines an ApiKey value
  IF (..#ApiKey '= "")
  {
    // The "api-key" (or "api_key") header arrives as the HTTP_API_KEY
    // CGI variable; $GET also covers the case of a missing header
    SET apiKey = $GET(%request.CgiEnvs("HTTP_API_KEY"))
    IF (apiKey '= ..#ApiKey)
    {
      // Wrong or missing key: stop dispatch and report 401 Unauthorized
      SET pContinue = 0
      DO ..ReportHttpStatusCode(..#HTTP401UNAUTHORIZED)
      SET tSC = $$$ERROR($$$GeneralError, ..#HTTP401UNAUTHORIZED)
    }
  }
  QUIT tSC
}

 

In this method, the first thing we do is check, via the MustCheckApiKey method, whether or not we should validate the API key for this request. (We could have a public part of our API that we don't want to protect with an API key.)

I created this method so that we can override it when we don't want validation to apply to all requests; by default it always returns true:

ClassMethod MustCheckApiKey(pUrl As %String, pMethod As %String) As %Boolean
{
  Quit 1
}

As you can see, it receives pUrl and pMethod:

pUrl contains the address the request is being made to, for example:

https://www.myApp.com/requestX

 

pMethod contains the request verb: GET, POST, PUT...

 

With this information, we can decide whether or not to apply API-key validation.

 

Example:

By overriding the MustCheckApiKey method in our extended class, we can arrange for a GET request to /home to skip API-key validation:

ClassMethod MustCheckApiKey(pUrl As %String, pMethod As %String) As %Boolean
{
  SET res = 1
  // GET requests to /home are public; everything else keeps the check
  IF (pUrl="/home") && (pMethod="GET")
  {
    SET res = 0
  }
  QUIT res
}

 

Next, we check whether a value has been set for the ApiKey parameter in our class; if so, we verify that the value of the "api-key" header in the request matches the value of our ApiKey parameter.

(The value can be passed in the header as either "api-key" or "api_key"; both arrive in the CGI environment as HTTP_API_KEY.)

If it matches, the request proceeds and returns its data; otherwise a 401 Unauthorized error is returned.
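As a quick usage sketch (the server, path, and key value are the illustrative ones used in this article, not a real deployment), here is how an ObjectScript client could pass the header with %Net.HttpRequest:

SET req = ##class(%Net.HttpRequest).%New()
SET req.Server = "www.myApp.com"
// The "api-key" header reaches the server as HTTP_API_KEY
DO req.SetHeader("api-key", "myRandomApiKeyValue")
SET tSC = req.Get("/requestX")
WRITE req.HttpResponse.StatusCode  // 200 with a valid key, 401 otherwise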

 

Here is an example of use from my project cos-url-shortener.

This project is an example of how to build a URL shortener with IRIS in Docker.

 

In this case, we want to protect the short-URL generation endpoint with an API key, because only our clients should use it; but when a user opens a shortened link, it would make no sense to apply API-key validation.

 

How would it be done?

First, we make sure that our class extends the Base.cls class:

Class AQS.urlShortener.UrlREST Extends urlShortener.REST.Base

 

Then we override the value of the ApiKey parameter:

Parameter ApiKey = "myRandomApiKeyValue";

 

Now, if we do NOT want API-key security applied to every endpoint, we override the MustCheckApiKey method. In my case, the GET requests happen to be public and the POST requests private, so I overrode it like this:

ClassMethod MustCheckApiKey(pUrl As %String, pMethod As %String) As %Boolean
{
  SET res = 1
  // All GET endpoints are public; everything else requires the API key
  IF (pMethod="GET")
  {
    SET res = 0
  }
  QUIT res
}

 

And we're done! Our endpoints are now protected by an API key.

Here are some screenshots of what a blocked request and a correct one look like:

 

Correct api-key:

 

Incorrect or missing api-key:

 

And here is a test of a GET request that works fine without an api-key:

 

Complete Base.cls file (in case you want to copy and paste)

 

 

 

I hope it can be useful to you in your projects.

 

Now we can sit down to enjoy our new security:

 

 

Thanks for reading!

0 Comments