
Announcement
· May 18, 2024

[Video] IRIS AI Studio: A detailed Functionality & Code Walkthrough

Hi Community,

This is a detailed, candid walkthrough of the IRIS AI Studio platform. I think out loud while trying different examples, some of which fail to deliver the expected results - which, I believe, is exactly why such a platform is needed: to explore different models, configurations and limitations. This will be helpful if you're interested in how to build 'Chat with PDF' or data recommendation systems using IRIS DB and LLM models.

Question
· May 18, 2024

Confusion regarding order status

Hi. I am struggling to understand the meaning of different elements of an order in my hospital's EHR.

Page: EPR > All Orders

Question: What is the difference between Start Date, Date Executed, and Order End Date? If an order was started on Monday and executed on Tuesday, does this mean the patient received the order on Monday or Tuesday? And what is the End Date?

Also, how do I interpret order status? What is the difference between discontinued, verified, and executed?

Also, some orders have a green / red / yellow bar to their left that continues as a line under that row. What does that mean?

Thanks in advance.

Article
· May 18, 2024 · 3 min read

Enhancing Preventive Health Engagement: The Backend Powering ChatIRIS with InterSystems IRIS

ChatIRIS Health Coach is a GPT-4 based agent that leverages the Health Belief Model as a psychological framework to craft empathetic replies. This article elaborates on the backend architecture and its components, focusing on how InterSystems IRIS supports the system's functionality.

Article
· May 18, 2024 · 3 min read

IRIS AI Studio: Playground to explore RAG capabilities on top of IRIS DB vector embeddings

In the previous article, we looked in detail at Connectors, which let users upload a file, convert it into embeddings, and store them in IRIS DB. In this article, we'll explore the different retrieval options that IRIS AI Studio offers - Semantic Search, Chat, Recommender and Similarity. 

New Updates  ⛴️ 

  • Added installation through Docker. Run `./build.sh` after cloning to get the application & IRIS instance running locally
  • Connect via the InterSystems extension in VS Code - thanks to @Evgeny Shvarov 
  • Added FAQs on the home page covering the basic info for new users

Semantic Search is a data retrieval method that aims to understand the user's intent and the context of the query rather than just matching keywords; it considers the relationships between words and concepts.

Building on the index creation and data loading into IRIS DB covered in the previous article's Connectors code, we now start from that index and run the query engine:

Ref. to `query_llama.py`

query_engine = index.as_query_engine()
response = query_engine.query(query)
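To make the flow concrete, here is a minimal sketch of how the returned response can be inspected - the query string is just an illustration, and `index` is assumed to be the vector index built over IRIS DB in the Connectors step:

# Illustrative usage, not part of query_llama.py - inspect the synthesized
# answer and the chunks retrieved from IRIS DB.
query = "What does the uploaded document say about vector search?"  # made-up query
query_engine = index.as_query_engine()
response = query_engine.query(query)

print(response)                       # LLM-synthesized answer
for node in response.source_nodes:    # retrieved chunks with similarity scores
    print(node.score, node.node.get_content()[:80])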

 

Chat Engine

The Chat Engine augments the loaded content and acts as a stateful version of the previous query_engine. It keeps track of the conversation history and can answer questions with past context in mind.

Ref. to `chat_llama.py`

chat_engine = index.as_chat_engine(chat_mode='condense_question')
response = chat_engine.chat(user_message)
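To see the statefulness in action, here is an illustrative two-turn exchange - the questions are made up, and `index` again comes from the Connectors step:

# Illustrative multi-turn chat, not part of chat_llama.py.
chat_engine = index.as_chat_engine(chat_mode='condense_question')

first = chat_engine.chat("Which products are mentioned in the uploaded document?")
# The follow-up works because the engine condenses it together with the history above.
second = chat_engine.chat("And which of those is the most expensive?")
print(second)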

 

Recommender 

The Recommender is similar to the recommendation systems used on e-commerce sites, where similar products are displayed below the product we look for. The Recommender and Similarity options work along similar lines, but with the Recommender you can choose the LLM re-ranking model to be used.

Ref. to `reco_llama.py`

# Import paths may differ depending on your llama_index version.
from llama_index.llms.openai import OpenAI
from llama_index.postprocessor.cohere_rerank import CohereRerank
from llama_index.core.postprocessor import RankGPTRerank

if reco_type == 'cohere_rerank':
    cohere_rerank = CohereRerank(api_key=cohere_api_key, top_n=results_count)
    query_engine = index.as_query_engine(
        similarity_top_k=10,
        node_postprocessors=[cohere_rerank],
    )
    response = query_engine.query(query)

elif reco_type == 'openai_rerank':
    rankgpt_rerank = RankGPTRerank(
        llm=OpenAI(
            model="gpt-3.5-turbo-16k",
            temperature=0.0,
            api_key=openai_api_key,
        ),
        top_n=results_count,
        verbose=True,
    )
    query_engine = index.as_query_engine(
        similarity_top_k=10,
        node_postprocessors=[rankgpt_rerank],
    )
    response = query_engine.query(query)
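Note that the two branches above differ only in which node postprocessor is plugged into the query engine. As a rough sketch (not how the repo is organized), the choice could be expressed as a simple mapping, which also makes it easy to add more rerankers later:

# Sketch only - the reranker choice boils down to a different node postprocessor.
# Both rerankers are constructed eagerly here purely for illustration.
rerankers = {
    'cohere_rerank': CohereRerank(api_key=cohere_api_key, top_n=results_count),
    'openai_rerank': RankGPTRerank(
        llm=OpenAI(model="gpt-3.5-turbo-16k", temperature=0.0, api_key=openai_api_key),
        top_n=results_count,
    ),
}

query_engine = index.as_query_engine(
    similarity_top_k=10,
    node_postprocessors=[rerankers[reco_type]],
)
response = query_engine.query(query)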

Similarity

The Similarity feature returns a similarity score, which helps evaluate the quality of a question-answering system via semantic similarity. You can filter the results based on the minimum similarity score and the number of similar items to retrieve from the DB.

Ref. to `similarity_llama.py`

retriever = index.as_retriever(similarity_top_k=top_k)
nodes = retriever.retrieve(query)
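The minimum-score filtering mentioned above can be applied to the retrieved nodes; here is a minimal sketch where `min_score` is an illustrative threshold, not a value from the project:

# Illustrative post-filtering of retrieved nodes by similarity score.
min_score = 0.75                      # made-up threshold
retriever = index.as_retriever(similarity_top_k=top_k)
nodes = retriever.retrieve(query)

filtered = [n for n in nodes if n.score is not None and n.score >= min_score]
for n in filtered:
    print(round(n.score, 3), n.node.get_content()[:80])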

 

By leveraging these different retrieval options in IRIS AI Studio - Semantic Search, Chat Engine, Recommender, and Similarity - users can explore the potential of their data. These features enable advanced information retrieval, contextual question answering, personalized recommendations, and semantic similarity analysis, empowering users to derive valuable insights and make data-driven decisions (or at least to build similar systems in their own domain).

🚀 Vote for this application in the Vector Search, GenAI and ML contest if you find it promising!

Feel free to share any feedback or inputs you may have. Thank you.

Article
· May 17, 2024 · 4 min read

IRIS SIEM System Integration with Crowdstrike Logscale

IRIS makes SIEM systems integration simple with Structured Logging and Pipes!

Adding a SIEM integration to InterSystems IRIS for "Audit Database Events" was dead simple with the Community Edition of CrowdStrike's Falcon LogScale, and here's how I got it done.  

CrowdStrike Community LogScale Setup

Getting started was ridiculously straightforward, and I had the account approved in a couple of days, with the following disclaimer:

Falcon LogScale Community is a free service providing you with up to 16 GB/day of data ingest, up to 5 users, and 7 day data retention, if you exceed the limitations, you’ll be asked to upgrade to a paid offering. You can use Falcon LogScale under the limitations as long as you want, provided, that we can modify or terminate the Community program at any time without notice or liability of any kind.

Pretty generous and a good fit for this implementation, with the caveat that all good things can come to an end, I guess. Cut yourself an ingestion token in the UI and save it to your favorite hiding place for secrets.

Python Interceptor - irislogd2crwd.py

I won't go over this amazing piece of software engineering in detail, but it is simply a Python implementation that accepts STDIN, breaks what it sees into events, and ships them off to the SIEM platform to be ingested.

#!/usr/bin/env python
import json
import os
import sys
import socket
from datetime import datetime, timezone
from humiolib.HumioClient import HumioIngestClient

# The LogScale ingest client only needs to be created once; the token comes
# from the environment so it never lands in the script itself.
client = HumioIngestClient(
    base_url="https://cloud.community.humio.com",
    ingest_token=os.environ["CRWD_LOGSCALE_APIKEY"]
)

fqdn = socket.getfqdn()  # Required for the CRWD Data Source tags

input_list = sys.stdin.read().splitlines()  # From ^LOGDMN Pipe!
for irisevent in input_list:
    # Event timestamp in UTC (ISO 8601)
    now = datetime.now(timezone.utc)

    payload = [
        {
            "tags": {
                "host": fqdn,
                "source": "irislogd"
            },
            "events": [
                {
                    "timestamp": now.isoformat(sep='T', timespec='auto'),
                    "attributes": {"irislogd": json.loads(irisevent)}
                }
            ]
        }
    ]

    ingest_response = client.ingest_json_data(payload)

You will want to chmod +x this script and put it where irisowner can enjoy it.
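Before wiring it into ^LOGDMN, one quick sanity check (purely illustrative, not part of the original setup) is to pipe a fake JSON line into the script yourself - the event fields below are made up, and CRWD_LOGSCALE_APIKEY must already be exported:

# Hypothetical smoke test for the interceptor: feed it one fake JSON "event"
# on STDIN the same way ^LOGDMN would.
import json
import subprocess

fake_event = json.dumps({"event": "SmokeTest", "text": "hello from the interceptor test"})
subprocess.run(
    ["/tmp/irislogd2crwd.py"],     # the chmod +x'd interceptor from above
    input=fake_event + "\n",       # one line on STDIN == one event
    text=True,
    check=True,
)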



InterSystems IRIS Structured Logging Setup

Structured Logging in IRIS is documented to the nines, so this will just be the Cliff Notes version of the end state of configuring ^LOGDMN. The thing that caught my attention in the docs is probably the most unclear part of the implementation, but also the most powerful and fun for sure.

After ENABLING the Log Daemon, CONFIGURING the Log Daemon, and STARTING logging, your configuration should look like this:
 

%SYS>Do ^LOGDMN
1) Enable logging
2) Disable logging
3) Display configuration
4) Edit configuration
5) Set default configuration
6) Display logging status
7) Start logging
8) Stop logging
9) Restart logging

LOGDMN option? 3
LOGDMN configuration

Minimum level: -1 (DEBUG)
 Pipe command: /tmp/irislogd2crwd.py
       Format: JSON
     Interval: 5
/tmp/irislogd2crwd.py  # Location of our chmod +x Python Interceptor
JSON                   # Important

Now that we are logging somewhere else, let's just pump up the verbosity in the Audit Log and enable all the events, since somebody else is paying for it.

Stealing from @Sylvain Guilbaud's post:



CrowdStrike LogScale Event Processing

It won't take long to get the hang of it, but the Search Console is the beginning of all good things when setting up customized observability based on your events. The search pane with its filter criteria sits in the left corner, the available attributes are in the left sidebar, and the matching events appear in the results pane in the main view.

LogScale uses the LogScale Query Language (LQL) to back the widgets, alerts and actions.

I suck at visualizations, so I am sure you could do better than what's below with a box of crayons, but here are my 4 widgets of glory to put a clown suit on the SIEM events for this post:

If we look under the hood of the "Event Types" widget, the following LQL is all that is needed to back the time series graph:

timechart(irislogd.event)


So we did the thing!

We've integrated IRIS with the Enterprise SIEM implementation and the Security Team is "😀 "  

The bonus here is that the following are also accomplished with the exact same development pattern as above:

  • Notifications
  • Actions
  • Scheduled Searches
  • Scheduled Daily Reports