Article · April 4 · about 24 min read

Part IV: Studying for IRIS Developer Professional Certification with the "Ask Developer Community AI"

The "Ask Developer Community AI" tool is an excellent resource for studying for the certification. I asked it about each topic that will be covered in the test and the results are below.
Note: I rated the quality of each answer as good, average, or bad.

Note 2: The article has 4 parts, one for each exam area.

4. Best practices: Code

4.1 Writes defensive code


[GOOD]  Chooses strategy for error handling  

Choosing a strategy for error handling in InterSystems can be approached in several ways depending on the specific requirements and context of your application.
1.    Extending EnsLib.HL7.Operation.TCPOperation: If you need to handle errors based on specific content within HL7 messages, you can extend the EnsLib.HL7.Operation.TCPOperation class and override the OnGetReplyAction method. This allows you to customize the error handling logic by checking the content of the error messages and deciding the appropriate action based on a lookup table. This approach provides granular control over error handling and makes it accessible to the wider team by using a lookup table for error actions [1].
2.    Traditional Error Processing: For legacy applications, traditional error processing using $ZTRAP and $ETRAP can be used. These methods allow you to set up error handlers that execute specific commands when an error occurs. $ZTRAP is used to specify a routine or label to handle errors, while $ETRAP can contain commands to handle errors within a specific context. However, these methods are less recommended for new applications due to their complexity and potential for errors [2].
3.    TRY-CATCH Mechanism: The recommended modern approach for error handling in InterSystems IRIS is the TRY-CATCH mechanism. This method allows you to define protected blocks of code (TRY blocks) and specify how to handle exceptions (CATCH blocks). This approach simplifies error detection and handling, separates error handling code from regular code, and supports error propagation and recovery [3][4].
4.    BPL Error Handling: For business processes using BPL (Business Process Language), you can use fault handlers within scopes to catch and handle errors. BPL provides elements like <scope>, <throw>, <catch>, and <catchall> to manage errors and specify how to recover from them [5].
Each of these methods has its own advantages and is suitable for different scenarios. For new applications, the TRY-CATCH mechanism is generally recommended due to its simplicity and robustness. For specific HL7 error handling, extending the EnsLib.HL7.Operation.TCPOperation class provides a tailored solution. For legacy systems, traditional error processing methods may still be in use.
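As a minimal sketch of the recommended TRY-CATCH approach (the division is only an illustrative operation; the exception methods shown are the standard %Exception API):

```objectscript
Try {
    Set result = 10 / divisor          ; throws a <DIVIDE> error when divisor is 0
    Write "Result: ", result, !
} Catch ex {
    ; ex is a %Exception.SystemException for runtime errors
    Write "Caught: ", ex.DisplayString(), !
    Set status = ex.AsStatus()         ; convert to a %Status for callers that expect one
}
```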
References:
•    Extending EnsLib.HL7.Operation.TCPOperation for custom error handling [1]
•    Traditional Error Processing using $ZTRAP and $ETRAP [2]
•    Using TRY-CATCH for error handling [3][4]
•    Handling Errors in BPL [5]

Sources
1.Extending EnsLib.HL7.Operation.TCPOperation for custom error handling
2.(Legacy) Traditional Error Processing | Using ObjectScript | InterSystems IRIS Data Platform 2024.3
3.Using TRY-CATCH | Using ObjectScript | InterSystems IRIS Data Platform 2024.3
4.ObjectScript error management
5.Handling Errors in BPL | Developing BPL Processes | InterSystems IRIS Data Platform 2024.3

[GOOD]  Diagnoses and troubleshoots system performance and code execution performance
To diagnose and troubleshoot system performance and code execution performance, you can use several tools and methods available in InterSystems IRIS:
1.    ^PERFSAMPLE: This tool samples the state of processes at high frequency and sorts the data along various dimensions such as the process’s current routine, namespace, state string, and wait state within the database kernel. It helps in identifying what processes are spending their time doing, whether they are using CPU, reading database blocks, or waiting for other resources. [1]
2.    ^PERFMON and ^%SYS.MONLBL: These utilities help identify the busiest routines and analyze them in detail. ^PERFMON counts the number of accesses to globals and routines, while ^%SYS.MONLBL provides detailed statistics on the number of times each line in a routine was executed, the number of global references, and the time spent at each line. This approach is useful for pinpointing performance issues in specific pieces of code. [2]
3.    ^SystemPerformance: This utility collects detailed performance data about an InterSystems IRIS instance and the platform on which it is running. It generates performance reports that can help diagnose system problems. The utility can be run in the terminal or scheduled in the Management Portal. [3]
These tools provide comprehensive insights into system performance and can help you identify and resolve performance bottlenecks effectively.
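As a sketch, these utilities are started from the terminal as shown below; the menus are interactive. ^SystemPerformance is run from %SYS, while ^%SYS.MONLBL is typically run from the namespace containing the routines to be monitored ("USER" here is just an example:)

```objectscript
ZN "%SYS"
Do ^SystemPerformance      ; collect an instance-wide profile; output is an HTML report
ZN "USER"
Do ^%SYS.MONLBL            ; interactive menus select routines and per-line metrics
```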

Sources
1."What's taking so long?" - Process Sampling for Performance Analysis
2.APM - Finding the expensive parts of your application
3.Monitoring Performance Using ^SystemPerformance | Monitoring Guide | InterSystems IRIS Data Platform 2024.3

[GOOD]  Manages and monitors process memory
To manage and monitor process memory in InterSystems IRIS, you can use several tools and techniques:
1.    Monitor Memory Utilization: You can monitor the memory utilization of your system, including both the percentage of available memory and the percentage of available swap space. This can be done using the InterSystems IRIS Health Monitor or a third-party monitoring tool [1].
2.    Managing Process Memory: Processes in InterSystems IRIS use memory for various entities such as variables, object instances, and I/O buffers. Memory is allocated from a pool and returned to the pool when no longer needed. The system variable $ZSTORAGE can be used to set a limit on memory consumption for each process, with a maximum of 2TB. This value can be set via the Management Portal or the configuration file (iris.cpf) [2].
3.    Monitoring for Memory Swapping: It is important to monitor if memory swapping is occurring in your system. If swapping occurs, an administrator should be notified. This can be done using a third-party monitoring tool [3].
4.    Monitoring System Dashboard Indicators: The System Dashboard in the Management Portal provides various performance indicators, including memory usage. You can navigate to more detailed information from the dashboard [4].
5.    Evaluate Maximum Memory per Process Size: Periodically evaluate the maximum memory per process and decide whether to increase its size based on the system’s growth. This involves checking the bbsiz parameter and adjusting it as needed [5].
For detailed instructions on each of these tasks, please refer to the respective sections in the InterSystems documentation.
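For example, $ZSTORAGE (in kilobytes) and $STORAGE (in bytes) can be inspected and adjusted per process; the 512 MB value below is only illustrative:

```objectscript
Write $ZSTORAGE, !           ; this process's memory limit, in KB
Set $ZSTORAGE = 524288       ; raise the limit for this process to 512 MB
Write $STORAGE, !            ; memory still available to this process, in bytes
```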

Sources
1.Monitor Memory Utilization | Monitoring and Operations Guide | InterSystems IRIS Data Platform 2024.3
2.Process Memory in InterSystems Products | InterSystems IRIS Data Platform 2024.3
3.Monitor for Memory Swapping | Monitoring and Operations Guide | InterSystems IRIS Data Platform 2024.3
4.Monitoring InterSystems IRIS Using the Management Portal | Monitoring Guide | InterSystems IRIS Data Platform 2024.3
5.Evaluate Maximum Memory per Process Size | Monitoring and Operations Guide | InterSystems IRIS Data Platform 2024.3

[AVERAGE]  Manages processes (including background processes)

To manage processes, including background processes, in InterSystems IRIS, you can use the JOB command to initiate a background process. The process issuing the JOB command is known as the parent process, and the initiated background process is known as the child process. This background process may also be referred to as a “jobbed process” or a “spawned process” [1].
You can manage and control these processes using the Management Portal. The available functions include displaying process information, displaying process details, suspending/resuming a process, terminating a process, displaying process variables, and broadcasting messages to terminals [2].
For more detailed instructions on managing processes, including suspending, resuming, and terminating processes, please refer to the relevant sections in the Management Portal documentation [2].
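A minimal sketch of starting and checking a background process with JOB (^MyTask is a hypothetical routine name):

```objectscript
Job ^MyTask::5               ; start ^MyTask in the background; wait up to 5 seconds
If $Test {
    Write "Child process PID: ", $ZChild, !
} Else {
    Write "Background job did not start", !
}
```

The timeout argument sets $TEST, so the parent can tell whether the child actually started.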

Sources
1.Process Management | Specialized System Tools and Utilities | InterSystems IRIS Data Platform 2024.3
2.Controlling InterSystems IRIS Processes | System Administration Guide | InterSystems IRIS Data Platform 2024.3

[GOOD]  Describes general system limits in IRIS (e.g., max string vs stream, # of properties)
InterSystems IRIS has several system limits that you should be aware of:
1.    String Length Limit: The maximum length of a string is 3,641,144 characters. If a string contains only characters with codes from 0 to 255, each character takes up 8 bits. If it contains at least one character with a code greater than 255, each character takes up 16 bits [1].
2.    Subscript Limits:
o    For a local array, the maximum length of a subscript is 32,767 encoded bytes.
o    For a global array, the maximum length of a subscript is 511 encoded bytes.
o    For a process-private global, the maximum length of a subscript is 507 encoded bytes.
o    The maximum number of subscript levels is 255 for a local variable and 253 for a global or process-private global [1].
3.    Maximum Length of a Global Reference: The total length of a global reference is limited to 511 encoded characters [1].
4.    Class Limits:
o    Inheritance depth: 50
o    Foreign keys: 400 per class
o    Indexes: 400 per class
o    Methods: 2000 per class
o    Parameters: 1000 per class
o    Projections: 200 per class
o    Properties: 1000 per class
o    Queries: 200 per class
o    SQL constraints: 200 per class
o    Storage definitions: 10 per class
o    Superclasses: 127 per class
o    Triggers: 200 per class
o    XData blocks: 1000 per class [1].
5.    Class and Routine Limits:
o    Class method references: 32,768 unique references per routine or class
o    Class name references: 32,768 unique references per routine or class
o    Instance method references: 32,768 per routine or class
o    Lines: 65,535 lines per routine, including comment lines
o    Literals (ASCII and Unicode): 65,535 per routine or class
o    Parameters: 255 per subroutine, method, or stored procedure
o    Procedures: 32,767 per routine
o    Property read references: 32,768 per routine or class
o    Property set references: 32,768 per routine or class
o    Routine references: 65,535 per routine or class
o    Target references: 65,535 per routine or class
o    TRY blocks: 65,535 per routine
o    Private variables: 32,763 per procedure
o    Public variables: 65,503 per routine or class [1].
For more detailed information, you can refer to the specific sections in the InterSystems IRIS documentation [1].
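Some of these limits can be checked at run time; for example, the maximum string length for the current process (the Try/Catch merely demonstrates the error raised when the limit is exceeded):

```objectscript
Write $SYSTEM.SYS.MaxLocalLength(), !        ; typically 3641144
Try {
    Set x = "", $Piece(x, "^", 4000000) = "x"    ; builds a string of ~4,000,000 characters
} Catch ex {
    Write ex.Name, !                             ; error name, e.g. <MAXSTRING>
}
```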

Sources
1.General System Limits | ObjectScript Reference | InterSystems IRIS Data Platform 2024.3

4.2 Writes secure code  

[GOOD] 
Implements database and data element encryption
InterSystems IRIS provides a suite of encryption technologies to protect data at rest, including both block-level database encryption and data-element encryption.
1.    Block-level Database Encryption: This encrypts the entire database, including data, indexes, bitmaps, pointers, allocation maps, and incremental backup maps. The encryption and decryption processes are optimized for performance and are transparent to users and applications. To create an encrypted database, you need to activate a database encryption key and then specify encryption when creating the database through the Management Portal [1][2][3][4].
2.    Data-element Encryption: This allows for encrypting specific data elements within an application, such as credit card numbers or social security numbers. It is implemented programmatically using an API, which provides methods for encrypting and decrypting individual data elements. This method is useful for applications that need to encrypt only certain fields and can be used alongside block-level encryption [1][2][5][6].
3.    Encryption Key Management: InterSystems IRIS provides tools for creating and managing encryption keys, which can be stored in key files or on key servers using the Key Management Interoperability Protocol (KMIP). These keys are essential for both database and data-element encryption [1][2][3][5].
For detailed steps on implementing these encryption methods, please refer to the respective sections in the documentation:
•    For block-level database encryption, see the “Create an Encrypted Database” section [4].
•    For data-element encryption, see the “Using Data-Element Encryption” section [5].
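Data-element encryption is invoked through the %SYSTEM.Encryption API. A sketch, assuming a managed data-element encryption key has already been activated ("MyKeyID" is a hypothetical key identifier; check the exact method signatures for your version):

```objectscript
Set keyID  = "MyKeyID"                                               ; hypothetical key ID
Set cipher = $SYSTEM.Encryption.AESCBCManagedKeyEncrypt("123-45-6789", keyID)
Set plain  = $SYSTEM.Encryption.AESCBCManagedKeyDecrypt(cipher)      ; the key ID travels with the ciphertext
```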

Sources
1.Encryption | InterSystems IRIS Data Platform 2024.3
2.Encryption | Security Reference (Contents) | InterSystems IRIS Data Platform 2024.3
3.Demo: Database Encryption | Security Reference | InterSystems IRIS Data Platform 2024.3
4.Using Encrypted Databases | Security Reference | InterSystems IRIS Data Platform 2024.3
5.Data-Element Encryption | Security Reference | InterSystems IRIS Data Platform 2024.3
6.Making encrypted datafields SQL-searchable

 

[GOOD]  Connects securely to external systems

InterSystems IRIS provides several options for securely connecting to external systems:

  •     HTTP Adapters: These enable productions to send and receive HTTP requests and responses. The port and interface IP addresses of the inbound HTTP adapter are configurable, and the server and port for the outbound HTTP adapter are provided by class settings [1].
  •     Java Gateway: This allows production adapters to communicate through a Java intermediary process. The InterSystems IRIS server process communicates with the Java process via a configurable TCP connection [1].
  •     LDAP: The EnsLib.LDAP.OutboundAdapter class can be used by Business Services to send requests to an LDAP server and receive responses [1].
  •     MQSeries: The EnsLib.MQSeries.InboundAdapter and EnsLib.MQSeries.OutboundAdapter enable productions to retrieve messages from and send messages to IBM WebSphere MQ message queues [1].
  •     Pipe: The EnsLib.Pipe.InboundAdapter and EnsLib.Pipe.OutboundAdapter enable productions to invoke operating system commands or shell scripts, communicating with them via a pipe [1].
  •     SAP: The Java Gateway is used to communicate with the SAP Java Connector using classes imported with the EnsLib.SAP.BootStrap class ImportSAP method [1].
  •     SQL: The SQL inbound and outbound adapters enable productions to communicate with JDBC or ODBC-compliant databases [1].
  •     TCP: InterSystems IRIS provides input and output TCP adapters for managing incoming or outgoing TCP connections [1].
  •     Telnet: The EnsLib.Telnet.OutboundAdapter permits outbound telnet connections to the telnet facility on another system [1].

For more detailed information on configuring these adapters and gateways, please refer to the relevant sections in the InterSystems IRIS documentation.


Sources
1.Secure InterSystems Processes and Operating-System Resources | Securing Your Instance | InterSystems IRIS Data Platform 2024.3

 


[GOOD]  Prevents SQL injection attacks (e.g., sanitizing, concatenating vs parameterizing)  

To prevent SQL injection attacks in InterSystems IRIS, it is crucial to use parameterized queries instead of concatenating user inputs directly into SQL statements. Parameterized queries ensure that user inputs are treated strictly as data rather than executable code, significantly reducing the risk of SQL injection. For example, instead of constructing a query string by concatenating user inputs, use placeholders (?) and bind the user inputs as parameters:

Set query = "SELECT Name, Age FROM Patients WHERE Age > ?"
Set statement = ##class(%SQL.Statement).%New()
Set status = statement.%Prepare(query)
If $SYSTEM.Status.IsOK(status) {
    Set result = statement.%Execute(age)    ; age is bound as a parameter, never concatenated
    While result.%Next() {
        Write "Name: ", result.Name, ", Age: ", result.Age, !
    }
}

This approach ensures that the age value is treated as data, not as part of the SQL command, thus preventing SQL injection [1].
Additionally, InterSystems IRIS provides several built-in measures to make SQL injection more difficult:
1.    It does not allow multiple SQL statements to be executed in a single request.
2.    It offers object access, which can be used to implement update processing, thereby preventing SQL injection through SQL statements.
3.    CSP (IRIS’s web technology) can encrypt all URL query strings, making it challenging to use techniques such as obtaining clues for injection from query strings.
4.    The IRIS security model allows granting database access to applications rather than users, and users can be restricted to read and write access to the database, with only permissions to run the application [2].

Sources
1.Avoiding SQL Injection in InterSystems IRIS: The Case for Secure Query Practices
2.Countermeasures against SQL injection

[AVERAGE]  Prevents remote code execution
To reduce the risk of remote code execution in InterSystems IRIS, limit remote privileged access by disabling the handling of remote jobs: set the netjob parameter to false (it is true by default). For detailed instructions on changing this parameter, refer to the procedure in the documentation [1].
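In the instance's iris.cpf this corresponds to a line in the [config] section (a sketch; in practice the value is changed through the Management Portal or by editing the file while the instance is down):

```
[config]
netjob=0
```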

Sources
1.Tighten Security for an Instance | Securing Your Instance | InterSystems IRIS Data Platform 2024.3

[GOOD]  Leverages InterSystems IRIS security model

InterSystems IRIS employs a Role-Based Access Control (RBAC) model for its security framework. In this model, a system administrator assigns users to one or more roles, each of which is authorized to perform a specific set of activities with a particular set of resources. This helps manage and control access to various assets such as databases, services, applications, and administrative actions. The RBAC model ensures that users only have the permissions necessary to perform their tasks, enhancing security and simplifying privilege management [1][2][3].

Sources
1.About InterSystems Authorization | InterSystems IRIS Data Platform 2024.3
2.About InterSystems Authorization | Authorization Guide | InterSystems IRIS Data Platform 2024.3
3.InterSystems IRIS Security | Orientation Guide for Server-Side Programming | InterSystems IRIS Data Platform 2024.3

InterSystems IRIS leverages a comprehensive security model that includes several key features:
1.    System Security: Tools to protect and harden instance-level security.
2.    Auditing: Keeps a log of predefined system and application-specific events.
3.    Authentication: Verifies the identity of all users.
4.    Authorization: Ensures that users can access only the resources they need.
5.    Encryption: Protects stored data from unauthorized access.
6.    TLS: Creates protected channels for communications and data transmission.
InterSystems IRIS security is designed to interoperate with other security products and tools, such as firewalls and operating system security features, to create a secure computing environment [1]. Additionally, the security model includes role-based access control, which allows for fine-grained access control based on roles and permissions [2].

Sources
1.Introducing InterSystems Security | InterSystems IRIS Data Platform 2024.3
2.InterSystems IRIS Security | Orientation Guide for Server-Side Programming | InterSystems IRIS Data Platform 2024.3

4.3 Ensures data integrity  
 

[GOOD]  Differentiates between journaling behavior inside vs outside transactions

Journaling behavior in InterSystems IRIS differs inside and outside transactions.
1.    Inside Transactions: InterSystems IRIS automatically journals any global update that is part of a transaction, regardless of the global journal state setting for the database in which the affected global resides. This ensures that all changes within a transaction are recorded and can be rolled back if the transaction does not complete successfully. If a transaction is incomplete, IRIS uses the journal entries to roll back the transaction, returning the globals involved to their pre-transaction values. This rollback occurs during recovery after a system crash, when a process is halted, or when a process is terminated from the Management Portal [1].
2.    Outside Transactions: For non-transactional operations, journaling records all global update operations performed on a database. This allows the database to be restored to its state immediately before a failure or crash by applying the changes in the journal files after restoring from a backup. However, if a database is not journaled, updates to it will not be recorded in the journal, and thus, these updates cannot be recovered after a failure [1].
In summary, journaling ensures data integrity by recording all updates within transactions and providing a mechanism to roll back incomplete transactions, while also maintaining a log of all global updates for recovery purposes outside of transactions.

Sources
1.Journaling Overview | Data Integrity Guide | InterSystems IRIS Data Platform 2024.3

 

[GOOD]  Minimizes requirements for journal volumes and performance
To minimize the requirements for journal volumes and improve performance, consider the following strategies:
1.    File System Separation: Use separate file systems for installation files, database files, primary journal directory, and alternate journal directory. This separation helps in performance and recoverability by ensuring that damage to one file system does not affect the others [1].
2.    Buffered I/O vs. Direct I/O: Use direct or concurrent I/O for journal files to bypass the operating system cache, which can improve performance. For database files, use direct I/O as InterSystems IRIS uses its own database cache [1].
3.    Data Compression: Implement journal compression to reduce storage costs. This can offer significant storage reduction and is compatible with journal encryption. However, it may increase the overhead necessary for journal restores [1].
4.    Disabling Journaling for Certain Operations: Under specific circumstances, you can disable journaling for filing operations, such as object saves and deletes, by setting the concurrency value to 0 when opening an object or by suspending object filer transaction processing for the current process [2].
5.    Monitoring and Profiling: Regularly monitor and profile your journal files to identify which globals are causing rapid growth. Use tools like the Journal Profile utility to get a summary of the size of the file taken up by each global and investigate further to determine if the growth is natural or if it can be minimized [3].
6.    Journal Syncs: Ensure that the average write response time for journal syncs is less than or equal to 0.5 ms, with a maximum of 1 ms, to maintain scalability and performance. This is particularly important in ECP deployments where journal syncs are frequent [4].
By implementing these strategies, you can optimize the performance and minimize the requirements for journal volumes in your InterSystems IRIS environment.

Sources
1.System Resource Planning and Management | InterSystems IRIS Data Platform 2024.3
2.Special Considerations for Journaling | Data Integrity Guide | InterSystems IRIS Data Platform 2024.3
3.What is causing the journals to grow rapidly?
4.Data Platforms and Performance - Part 7 ECP for performance, scalability and availability

[GOOD]  Manages transactions  
To manage transactions in InterSystems IRIS, you can use either SQL or ObjectScript commands. Here are the key commands and their functions:
1.    SQL Commands:
o    START TRANSACTION: Begins a transaction.
o    COMMIT: Signals a successful end of the transaction.
o    ROLLBACK: Signals an unsuccessful end of the transaction, rolling back all changes made during the transaction.
o    SAVEPOINT: Marks a point within a transaction to which you can roll back.
2.    ObjectScript Commands:
o    TSTART: Marks the beginning of a transaction.
o    TCOMMIT: Commits the transaction.
o    TROLLBACK: Rolls back the transaction. You can use TROLLBACK 1 to roll back only the current level of nested transactions.
These commands ensure that all operations within a transaction are completed successfully or none at all, maintaining the integrity of the database. For more detailed instructions on managing transactions, refer to the relevant documentation for SQL and ObjectScript transaction processing [1][2][3].
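A minimal ObjectScript sketch of these commands (^Account is a hypothetical global representing two balances):

```objectscript
TSTART
Set ^Account(1) = ^Account(1) - 100
Set ^Account(2) = ^Account(2) + 100
If ^Account(1) < 0 {
    TROLLBACK                ; undo both updates
} Else {
    TCOMMIT                  ; make both updates permanent
}
```

Either both global updates survive or neither does, which is exactly the integrity guarantee described above.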

Sources
1.Transaction Processing | Using ObjectScript | InterSystems IRIS Data Platform 2024.3
2.START TRANSACTION (SQL) | InterSystems SQL Reference | InterSystems IRIS Data Platform 2024.3
3.TROLLBACK (ObjectScript) | ObjectScript Reference | InterSystems IRIS Data Platform 2024.3

 

[GOOD]  Enumerates causes for automatic transaction rollbacks
Automatic transaction rollbacks can occur due to several reasons:
1.    System Failure: If a system failure occurs, InterSystems IRIS automatically rolls back any incomplete transactions during the recovery process when the system restarts [1].
2.    Process Termination: When a process is terminated using the HALT command or the ^RESJOB utility, any transactions in progress are rolled back [1].
3.    Transaction Suspension: If transactions are suspended during a process, any changes made while transactions were suspended cannot be rolled back, but changes made before or after the suspension are rolled back [2].
4.    Error During Transaction: If an error occurs during a transaction, the transaction can be rolled back using the SQL ROLLBACK command or the ObjectScript TROLLBACK command [1][3].
These rollbacks ensure the logical integrity of the data by reverting the database to a consistent state before the transaction began.

Sources
1.Transaction Processing | Using ObjectScript | InterSystems IRIS Data Platform 2024.3
2.ROLLBACK (SQL) | InterSystems SQL Reference | InterSystems IRIS Data Platform 2024.3
3.TROLLBACK (ObjectScript) | ObjectScript Reference | InterSystems IRIS Data Platform 2024.3

4.4 Implements concurrency controls  

[GOOD]  Describes functionality of locking mechanisms with respect to stateful and stateless applications

Locking mechanisms in InterSystems IRIS are crucial for managing concurrency control in both stateful and stateless applications. The basic locking mechanism is the LOCK command, which delays activity in one process until another process signals that it is okay to proceed. This ensures that different processes do not change the same data element simultaneously, preventing data corruption.
For stateful applications, locks are typically used to control access to application data stored in globals. For example, if an application needs to read or modify global nodes, it creates shared or exclusive locks to prevent other processes from modifying or reading these nodes during the operation. This ensures data consistency and integrity during the application’s stateful operations.
For stateless applications, locks can be used to prevent multiple processes from performing the same activity simultaneously. For instance, a routine that should not be run by more than one process at a time can create an exclusive lock on a specific global node. If the lock is acquired, the routine proceeds; otherwise, it quits with an error message indicating that the routine is already running.
Locks can be exclusive or shared, and they can be non-escalating or escalating. Exclusive locks prevent any other process from acquiring a lock with the same name, while shared locks allow multiple processes to hold non-conflicting locks on the same resource. Escalating locks help manage large numbers of locks by consolidating them into a single lock when a threshold is reached, reducing memory consumption and the chance of filling the lock table.
In summary, locks in InterSystems IRIS are used to control access to data resources, ensuring data integrity and preventing simultaneous conflicting operations in both stateful and stateless applications [1] [2].
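The stateless "only one runner" pattern described above can be sketched as follows (^MyTaskLock is a hypothetical lock name):

```objectscript
Lock +^MyTaskLock:0          ; try to acquire an exclusive lock without waiting
If '$Test {
    Write "Routine is already running", !
    Quit
}
; ... do the work ...
Lock -^MyTaskLock            ; release the lock
```

The timeout of 0 makes the LOCK attempt non-blocking; $TEST reports whether the lock was obtained.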

Sources
1.Locking and Concurrency Control | Using ObjectScript | InterSystems IRIS Data Platform 2024.3
2.LOCK (ObjectScript) | ObjectScript Reference | InterSystems IRIS Data Platform 2024.3

[GOOD]  Follows best practices when using locks

When using locks in InterSystems IRIS, it is important to follow best practices to ensure system stability and prevent issues such as deadlocks. Here are some key best practices:
1.    Identify and Terminate Processes: Rather than removing a lock directly, identify and terminate the process that created the lock. Removing a lock can have a severe impact on the system depending on the purpose of the lock [1].
2.    Avoid Deadlocks: To prevent deadlocks, always include a timeout argument when acquiring locks. Follow a strict protocol for the order in which you issue incremental LOCK commands, ensuring all processes follow the same order for lock names. Using simple locking rather than incremental locking can also help avoid deadlocks [2].
3.    Use Shared and Exclusive Locks Appropriately: Use shared locks when multiple processes need to view the same record, and exclusive locks when a process needs to modify a record. Shared locks prevent other processes from acquiring exclusive locks, and vice versa [3].
4.    Lock Arrays Correctly: When locking arrays, you can lock either the entire array or specific nodes. Locking a node prevents other processes from locking any subordinate nodes or direct ancestors of the locked node [3].
5.    Monitor and Manage Locks: Use the Management Portal to view and manage the lock table. This includes viewing current locks and, if necessary, removing locks. The lock table records all current locks and the processes that own them [2].
6.    Use the LOCK Command Properly: The LOCK command is used to add and remove locks. Incremental locks are added using LOCK +lockname, and removed using LOCK -lockname. You can also specify lock types and timeouts to control the behavior of locks [2][3].
For detailed instructions on using the LOCK command and managing locks, refer to the relevant sections in the documentation [2][3].
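The timeout and release practices above can be sketched in ObjectScript (the global name ^Orders is illustrative):

```objectscript
// Practices 2 and 6: incremental exclusive lock with a 10-second timeout.
// $TEST is 1 if the lock was granted, 0 if the timeout expired.
LOCK +^Orders(42):10
IF '$TEST {
    WRITE "Could not lock ^Orders(42); another process holds it",!
    QUIT
}
// ... modify the protected data ...
SET ^Orders(42,"status") = "shipped"
// Release exactly what was acquired.
LOCK -^Orders(42)
```

A reader that only needs to view the record would request a shared lock instead, e.g. LOCK +^Orders(42)#"S":10.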

Sources
1.Locks (Tools/APIs) | InterSystems API Index | InterSystems IRIS Data Platform 2024.3
2.Locking and Concurrency Control | Using ObjectScript | InterSystems IRIS Data Platform 2024.3
3.Lock Command, continued | ObjectScript Tutorial | InterSystems IRIS Data Platform 2024.3

[GOOD]  Chooses between row locks and table locks
The choice between row locks and table locks in InterSystems SQL is managed by the LOCK_ESCALATION_THRESHOLD option. This option is set to an integer value that determines when to escalate from row-level locking to table-level locking. The default value is 1000. This means that if a process performs more than 1000 inserts, updates, or deletes on a single table within a single transaction, the system will attempt to acquire a table-level lock instead of continuing to lock individual rows. This helps to prevent the lock table from becoming too full [1].
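As a sketch, the threshold can be changed for the current process with the SET OPTION statement described in the cited reference (the value 500 is arbitrary):

```sql
-- Escalate from row locks to a table lock after 500 inserts, updates,
-- or deletes on a single table within one transaction (default: 1000).
SET OPTION LOCK_ESCALATION_THRESHOLD = 500
```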

Sources
1.SET OPTION (SQL) | InterSystems SQL Reference | InterSystems IRIS Data Platform 2024.3

Question
· April 4

OAuth2: retrieve the access_token and skip the login if it is valid

We have an OAuth server configured as an identity provider, and an external application (from another vendor) that connects correctly to the OAuth server.

For the needs of the project, we want to do the following:

  • If the user is not authenticated, show the OAuth login page, have them log in, and redirect them to the third-party application --> This part works
  • If the user is already authenticated (has already logged in and holds a valid access_token), there should be a cookie with the access_token generated at login, and when opening the third-party application's URL, if the access_token is valid, redirect them straight to the application instead of showing the OAuth login --> This is the part we cannot get to work

What do we have?

  • We have created a custom class "test.oauth.server.Authenticate" that extends %OAuth2.Server.Authenticate.
  • We have added the BeforeAuthenticate method. Here we are able to read the request cookies, find the one we created, obtain the access_token, validate it, and open the token itself:
Include Ensemble

Class test.oauth.server.Authenticate Extends %OAuth2.Server.Authenticate
{

ClassMethod BeforeAuthenticate(scope As %ArrayOfDataTypes, properties As %OAuth2.Server.Properties) As %Status
{
    $$$LOGINFO("Entering BeforeAuthenticate")

    Set currentNS = $Namespace
    Set httpRequest = %request
    Set tokenCookie = httpRequest.GetCookie("SessionToken")

    If tokenCookie '= "" {
        $$$LOGINFO("Token found in cookie: "_tokenCookie)

        // Manually call GetAccessToken with the token from the cookie
        If ..GetAccessToken(tokenCookie) {
            Set isValid = ##class(%SYS.OAuth2.Validation).ValidateJWT("ValidarToken", tokenCookie, , , .jsonObject, .securityParameters, .sc)
            $$$LOGINFO(isValid_" ("_sc_"): "_$System.Status.GetErrorText(sc))
            $$$LOGINFO(jsonObject.%ToJSON())

            Set $Namespace = "%SYS"
            Set token = ##class(OAuth2.Server.AccessToken).OpenByToken(tokenCookie,.sc)
            Set $Namespace = currentNS

            $$$LOGINFO(token_" ("_sc_"): "_$System.Status.GetErrorText(sc))

            Quit 1 // Continue without showing the login
        } Else {
            $$$LOGINFO("GetAccessToken rejected the token")
            Quit $$$OK
        }
    }

    $$$LOGINFO("No token found in the cookie")
    Quit $$$OK
}

ClassMethod GetAccessToken(ByRef AccessToken As %String) As %Boolean
{
    $$$LOGINFO("Entering GetAccessToken")

    // If we already received a token from BeforeAuthenticate
    If AccessToken '= "" {
        // Token received in GetAccessToken

        // Call the token validation function
        Set sc = ##class(%SYS.OAuth2.Validation).ValidateJWT("ValidarToken", AccessToken, , , .jsonObject, .securityParameters)
        Set user = jsonObject.sub
        $$$LOGINFO("Valid token. User: "_user)
        If user '= "" {
            $$$LOGINFO("Authenticated user: "_user)
            Quit $$$OK
        } Else {
            $$$LOGINFO("The user is empty.")
            Quit 0  // Return 0 if the user is empty
        }
    }

    Quit 0  // Return 0 if no token was obtained
}

}

But no matter what, even though we have the access_token, open the OAuth Token object, etc., the login page is still shown. We believe we are missing something, but we don't know what...

What can we do? Any ideas?

Thanks!

4 Comments
Announcement
· April 3

Global Masters: discover new rewards

Hello Community!

We have just added new rewards 🎁 to Global Masters!

Take a moment to explore the Rewards section. Exceptional spring rewards: grab yours now!

Please note that our prize selection varies by region and country. We invite you to check the options available for your location.

 

Not a Global Masters member yet? Sign up here using your InterSystems SSO credentials.

Article
· April 3 · about 23 min read

Part I: Studying for IRIS Developer Professional Certification with the "Ask Developer Community AI"

The "Ask Developer Community AI" tool is an excellent resource for studying for the certification. I asked it about each topic that will be covered in the test and the results are below.
Note: I classified each answer by its accuracy, which I rate as good, average, or bad.

Note 2: The article has 4 parts, each one for an exam area.

1.    Best practices: Architecture
1.1 Determines database storage strategy in InterSystems IRIS

[GOOD] Determines which databases should be included in a namespace

A namespace in InterSystems IRIS is a logical entity that provides access to data and code stored in multiple databases. Typically, a namespace includes:
1.    A default database for storing code, known as the routines database.
2.    A default database for storing data for persistent classes and any globals created, known as the globals database.
3.    A default database for temporary storage.
4.    Mappings that provide access to additional data and code stored in other databases. These mappings can refer to routines, class packages, entire globals, and specific global nodes in non-default databases. [1]
In a simple configuration, one namespace uses one database. However, it is often beneficial to separate the code and data into different databases for maintainability. For example, a namespace can be configured to use one database for routines (code) and another for globals (data). Additionally, mappings can be created to access specific globals from other databases. [2]
To create a namespace and assign databases to it, you can follow these steps:
1.    Create the databases you need.
2.    Create a new namespace and assign the databases for routines and globals.
3.    Optionally, configure mappings to access additional data or code from other databases. [3]
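Steps 2 and 3 can also be done programmatically from the %SYS namespace with the Config classes; a minimal sketch, assuming databases named APPCODE and APPDATA already exist (all names are illustrative):

```objectscript
// Run in the %SYS namespace.
NEW $NAMESPACE
SET $NAMESPACE = "%SYS"

// Create namespace APP with separate code and data databases.
SET props("Routines") = "APPCODE"   // code (routines) database
SET props("Globals")  = "APPDATA"   // data (globals) database
SET sc = ##class(Config.Namespaces).Create("APP", .props)
IF 'sc { WRITE $SYSTEM.Status.GetErrorText(sc),! }
```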

Sources
1.InterSystems IRIS Data Platform 2024.3 > Embedded Language Development > Orientation for Programmers > Namespaces and Databases
2.Databases only appear in Classes Section of Management Portal
3.How to separate source code and data in different databases

 

[GOOD]  Recommends database architecture based on expected data growth

To recommend a database architecture based on expected data growth, consider using InterSystems IRIS’s sharding capabilities. Sharding is a horizontal scalability feature that partitions data storage and caching across multiple servers, forming a sharded cluster. This approach provides flexible, inexpensive performance scaling for queries and data ingestion while maximizing infrastructure value through efficient resource utilization. Sharding is particularly beneficial for applications involving large amounts of data retrieval, complex data processing, or high-volume, high-velocity data ingestion. By distributing query processing and caching across multiple systems, sharding allows near-linear scaling of compute and memory resources, enabling the design and maintenance of a cluster tailored to your workload. [1]

Sources
1.InterSystems IRIS Data Platform 2024.3 > Architecture > Scalability > Scalability Guide > Scale for Data Volume

 

[GOOD]  Structures data to support global mappings  

To structure data to support global mappings in InterSystems IRIS, you need to follow these steps:
1.    Defining Databases: Create a local database by logging into the Management Portal, navigating to System Administration > Configuration > System Configurations > Local Databases, and using the Database Wizard to create a new database. [1]
2.    Defining Namespaces: Create a namespace that uses local databases by logging into the Management Portal, navigating to System Administration > Configuration > System Configurations > Namespaces, and using the wizard to create a new namespace. [1]
3.    Mapping a Global: To map a global to a specific database, log into the Management Portal, navigate to System Administration > Configuration > System Configurations > Namespaces, select Global Mappings for the desired namespace, and create a new global mapping. Specify the global database location and the global name. [1]
4.    Subscript-Level Mapping: You can map parts of a global (subscripts) to different databases. This allows data to span multiple disks. For example, you can map ^x to one database and ^x(1) to another. [2]
5.    Programmatic Mapping: You can also define global mappings programmatically using the appropriate classes and methods provided by InterSystems IRIS. [3]
By following these steps, you can effectively structure your data to support global mappings across different databases.
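As a sketch of steps 3-5, a global mapping (including a subscript-level one) can be created with the Config.MapGlobals class from the %SYS namespace; the namespace, global, and database names below are illustrative:

```objectscript
NEW $NAMESPACE
SET $NAMESPACE = "%SYS"

// Map all of ^AuditLog in namespace APP to database HISTORYDB.
SET props("Database") = "HISTORYDB"
SET sc = ##class(Config.MapGlobals).Create("APP", "AuditLog", .props)
IF 'sc { WRITE $SYSTEM.Status.GetErrorText(sc),! }

// Subscript-level mapping: ^AuditLog(2024) alone goes to ARCHIVEDB.
SET props("Database") = "ARCHIVEDB"
SET sc = ##class(Config.MapGlobals).Create("APP", "AuditLog(2024)", .props)
```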

Sources
1.InterSystems IRIS Data Platform 2024.3 > Embedded Language Development > Orientation for Programmers > Useful Skills to Learn
2.InterSystems IRIS Data Platform 2024.3 > Embedded Language Development > Multidimensional Storage > Mapping
3.How do we map Globals (holds data which spreads across different DBs) to multiple Databases

 

[GOOD]  Identifies implications of mirroring on application performance and availability

Mirroring in InterSystems Caché provides a high availability solution with automatic failover, which can significantly impact application performance and availability. Here are some key implications:
1.    Performance: Mirroring uses logical data replication to keep the backup system synchronized with the primary system. This approach minimizes the risks associated with physical replication, such as out-of-order updates and carry-forward corruption. However, network latency between the failover members is a critical factor in application performance. The physical locations of the failover members and the network connection between them should be chosen to minimize latency to ensure optimal performance [1][2].
2.    Availability: Mirroring provides a robust solution for both planned and unplanned downtime. It allows for rapid, unattended, automated failover between two Caché systems, ensuring high availability. The use of redundant components minimizes the risks associated with shared resources, and the logical data replication helps avoid single points of failure [1][3][4].
3.    Disaster Recovery: Mirroring can be configured with additional async members for disaster recovery. These async members can be geographically dispersed, providing a robust framework for distributed data replication and ensuring business continuity [1][3].
4.    Planned Downtime: Mirroring offers the flexibility to incorporate certain planned downtimes while minimizing the overall impact on the organization’s service level agreements (SLAs). This ensures that maintenance activities can be performed without significant disruption to the application [1][3].
5.    Automatic Failover: The automatic failover mechanism ensures that if the primary system becomes unavailable, the backup system can take over immediately, providing uninterrupted access to the databases without data loss. This is crucial for maintaining high availability and minimizing workflow and user disruption [1][3][2].
Overall, mirroring in InterSystems Caché enhances both the performance and availability of applications by providing a reliable, economical, and robust high-availability solution.

Sources
1.InterSystems Caché: Database Mirroring: A Technical Overview
2.InterSystems IRIS Data Platform 2024.3 > Architecture > Mirroring and High Availability > High Availability Guide > Mirroring Architecture and Planning
3.InterSystems Caché: Database Mirroring: An Executive Overview
4.Caché Mirroring 101: a Brief Guide and FAQ

 

[AVERAGE]  Identifies implications of IRIS upgrades on database architecture

InterSystems IRIS upgrades can have several implications on database architecture, particularly in the context of high availability (HA) and data management:
1.    High Availability Solutions: Upgrading InterSystems IRIS may require careful planning to ensure minimal downtime. For example, InterSystems IRIS mirroring allows for minimal downtime during upgrades if application code, classes, and routines are kept in separate databases from application data. This approach helps maintain high availability and ensures that the system remains operational during the upgrade process [1].
2.    Data Management and Scalability: Upgrades may also impact how data is managed and scaled. For instance, sharding is a significant horizontal scalability feature of InterSystems IRIS that partitions data storage and caching across multiple servers. This can provide flexible and efficient performance scaling for queries and data ingestion. Upgrading to a version that supports or enhances sharding capabilities can significantly improve the scalability and performance of the database [2].
3.    Upgrade Impact Checklist: InterSystems provides an Upgrade Impact Checklist that helps identify and manage the potential impacts of upgrading between different versions. This checklist includes considerations for changes in SQL behavior, data structures, and other critical aspects that could affect the database architecture. Using this tool can help ensure a smooth upgrade process by highlighting necessary adjustments and preparations [3].
These considerations highlight the importance of planning and utilizing available tools and features to manage the implications of IRIS upgrades on database architecture effectively.

Sources
1.InterSystems IRIS Data Platform 2024.3 > Architecture > Mirroring and High Availability > High Availability Solutions
2.InterSystems IRIS Data Platform 2024.3 > Architecture > Scalability > Scalability Guide > Scale for Data Volume
3.Introducing the Upgrade Impact Checklist

 

[AVERAGE]  Identifies implications of security requirements on database architecture

Security requirements have significant implications on database architecture. When deploying an application in a public cloud, it is crucial to follow your organization’s standard security policies or develop new ones specifically for cloud environments to maintain security compliance. Cloud deployments introduce the risk of data being outside client data centers and physical security control. Therefore, using database and journal encryption for data at rest and data in flight with AES and SSL/TLS encryption is highly recommended. Proper encryption key management procedures must be documented and followed to ensure data safety and prevent unauthorized access. Additionally, third-party firewall devices may be required for extra functionality such as intrusion detection and denial of service protection [1].
Furthermore, the security system in InterSystems IRIS is based on applying different security settings for users, roles, services, resources, privileges, and applications. Different security levels (Minimal, Normal, or Locked Down) can be chosen during installation, each with varying degrees of user engagement, available roles and services, and authentication methods. For example, the Locked Down level provides the highest security by disabling many services and requiring authentication for all enabled services [2].
In summary, security requirements necessitate careful planning and implementation of encryption, key management, and appropriate security settings to protect data and ensure compliance with organizational policies and regulations.

Sources
1.InterSystems Example Reference Architecture for Microsoft Azure Resource Manager (ARM)
2.Increasing the Security of the InterSystems IRIS DBMS

 

[AVERAGE]  Identifies costs and benefits of using InterSystems interoperability functionality  

InterSystems IRIS provides powerful interoperability functionality that offers several benefits, including the ability to eliminate data silos, integrate and orchestrate legacy and new services, normalize and transform data, design and implement business processes and rules, and optimize governance and forensics. This functionality supports various data formats and communication protocols, making it highly flexible for different integration needs. Additionally, it allows for the development of custom production components in multiple programming languages, enhancing its adaptability to specific requirements [1].
However, the costs associated with using InterSystems interoperability functionality can include the need for specialized knowledge to configure and manage the system, potential licensing fees, and the resources required for initial setup and ongoing maintenance. The complexity of integrating various systems and ensuring data consistency and security can also add to the overall cost [1].

Sources
1.InterSystems IRIS Data Platform 2024.3 > Connecting Systems

 

[GOOD]  Identifies benefits and tradeoffs for using InterSystems IRIS BI to augment usage of object and relational models

Using InterSystems IRIS BI (DeepSee) to augment the usage of object and relational models offers several benefits and tradeoffs:
Benefits:
1.    Unified Data Platform: InterSystems IRIS provides a comprehensive platform that integrates data management, analytics, and interoperability. This allows for seamless integration of BI capabilities with object and relational models, enabling real-time analytics and insights directly from transactional data [1].
2.    Flexibility: The platform supports multiple data models, including relational, object-oriented, and document-based models. This flexibility allows developers to choose the most appropriate model for their specific use case while leveraging the powerful analytics capabilities of IRIS BI [2].
3.    Real-Time Analytics: IRIS BI enables real-time analytics, allowing businesses to gain insights and make decisions based on the most current data. This is particularly beneficial for applications that require immediate insights, such as financial trading systems or healthcare applications [1].
4.    Scalability: The platform is designed to handle large datasets and high workloads, making it suitable for mission-critical applications that require high performance and scalability [2].
5.    Integration with Business Processes: IRIS BI can be embedded into business processes, enabling programmatic decisions based on real-time analyses. This integration helps automate and optimize business operations [1].
Tradeoffs:
1.    Complexity: Implementing a unified data platform that combines multiple data models and real-time analytics can be complex. It requires careful planning and expertise to ensure that all components work together seamlessly [1].
2.    Resource Intensive: Real-time analytics and the integration of multiple data models can be resource-intensive, requiring significant computational power and storage. Organizations need to ensure they have the necessary infrastructure to support these capabilities [1].
3.    Learning Curve: Leveraging the full capabilities of InterSystems IRIS BI may require a learning curve for developers and data analysts, particularly if they are not familiar with the platform’s features and tools [2].
Overall, using InterSystems IRIS BI to augment object and relational models provides significant advantages in terms of flexibility, real-time insights, and scalability, but it also comes with challenges related to complexity, resource requirements, and the learning curve.

 

[GOOD]  Identifies secure REST API design best practices

To design a secure REST API, consider the following best practices:
1.    Authentication and Authorization: Use methods like Basic Auth or OAuth 2.0 JWT Access Tokens to secure endpoints. Basic Auth involves checking the username and password provided in the HTTP Authorization header, while JWT tokens provide a more secure and flexible way to define privileges via scopes. [1]
2.    Endpoint Security: Secure individual endpoints by specifying which endpoints should allow anonymous access and which should not. This can be managed by defining a Web Application for your REST API and configuring it for unauthenticated access while specifying the required privileges for the application. [1]
3.    CORS Support: Enable Cross-Origin Resource Sharing (CORS) to allow restricted resources on a web page to be requested from another domain. This can be done by adding a parameter to the broker and overriding the OnHandleCorsRequest method to check request origin and allow access only from whitelisted domains. [2]
4.    API Management: Utilize tools for managing the API lifecycle, such as InterSystems API Manager, to handle the complexity of API development and ensure consistent and secure API operations. [2]
5.    Spec-First Development: Develop your APIs using a spec-first approach, which involves defining the API specification before writing any code. This helps in planning and ensuring that the API design meets security and functionality requirements. [2]
6.    Separation of Concerns: Implement a multitier architecture to separate different layers of your application, such as data, terminal API, web API, and client. This separation helps in managing complexity and enhancing security by isolating different parts of the application. [2]
By following these practices, you can create secure, robust, and maintainable REST APIs.
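Practice 3 can be sketched as a REST dispatch class that enables CORS and answers only a whitelisted origin; the class name, route, and domain are illustrative:

```objectscript
Class demo.api.Dispatch Extends %CSP.REST
{

Parameter HandleCorsRequest = 1;

/// Allow cross-origin calls only from a whitelisted domain.
ClassMethod OnHandleCorsRequest(url As %String) As %Status
{
    Set origin = %request.GetCgiEnv("HTTP_ORIGIN")
    If origin = "https://app.example.com" {
        Do %response.SetHeader("Access-Control-Allow-Origin", origin)
        Do %response.SetHeader("Access-Control-Allow-Methods", "GET,POST,OPTIONS")
        Do %response.SetHeader("Access-Control-Allow-Headers", "Content-Type,Authorization")
    }
    Quit $$$OK
}

XData UrlMap [ XMLNamespace = "http://www.intersystems.com/urlmap" ]
{
<Routes>
<Route Url="/ping" Method="GET" Call="Ping"/>
</Routes>
}

ClassMethod Ping() As %Status
{
    Write "{""status"":""ok""}"
    Quit $$$OK
}

}
```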

Sources
1.Securing Individual REST API Endpoints
2.REST Design and Development

 

1.2.    Determines data structures

[GOOD]  Differentiates between registered object, serial object, and persistent classes

In InterSystems IRIS, there are three main types of object classes: registered object classes, persistent classes, and serial classes.
1.    Registered Object Classes: These are subclasses of %RegisteredObject. They represent transient objects that can be created and manipulated in memory but are not saved to disk. Registered objects are useful for temporary data that does not need to be stored persistently. [1] [2]
2.    Persistent Classes: These are subclasses of %Persistent, which itself is a subclass of %RegisteredObject. Persistent classes add the ability to save objects to the database and retrieve them later. This makes them suitable for data that needs to be stored and accessed over time. Persistent objects are automatically projected to SQL tables, allowing them to be queried using SQL. [1] [2] [3]
3.    Serial Classes: These are subclasses of %SerialObject, which is also a subclass of %RegisteredObject. Serial classes are designed to be embedded within other objects, either transient or persistent. They are serialized into a string that represents their state and are typically used as properties within other objects. Serial objects cannot be saved or retrieved independently; they are always part of another object. [1] [2] [4]
Each type of class serves a different purpose and is used in different scenarios depending on whether the data needs to be transient, persistent, or embedded within other objects.
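A minimal sketch of the three kinds (class and property names are illustrative):

```objectscript
// Serial: embedded in other objects, serialized into the parent's storage.
Class demo.Address Extends %SerialObject
{

Property Street As %String;

Property City As %String;

}

// Persistent: savable to the database, projected as SQL table demo.Person.
Class demo.Person Extends %Persistent
{

Property Name As %String;

Property Home As demo.Address;   // embedded serial object

}

// Registered: in-memory only; there is no %Save() to call.
Class demo.ScratchPad Extends %RegisteredObject
{

Property Notes As %String;

}
```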

Sources
1.InterSystems IRIS Data Platform 2024.3 > Embedded Language Development > Defining & Using Classes > Defining Classes
2.InterSystems IRIS Data Platform 2024.3 > Embedded Language Development > Orientation for Programmers > Objects
3.InterSystems IRIS Data Platform 2024.3 > Embedded Language Development > Defining & Using Classes > Persistent Objects
4.InterSystems IRIS Data Platform 2024.3 > Embedded Language Development > Defining & Using Classes > Object-Valued Properties

 

[GOOD]  Determines indexes to add/update to improve performance

To determine which indexes to add or update to improve performance in InterSystems IRIS, you can follow these steps:
1.    Examine Query Performance: Use the SQL Query Optimizer to analyze the performance of your queries. The optimizer generates multiple query plans based on table statistics and chooses the most efficient one. You can use tools like the SQL Performance Analysis Toolkit to profile specific queries in detail [1].
2.    Define and Build Indexes: Indexes are crucial for optimizing SQL queries. You can define various types of indexes such as standard indexes, bitmap indexes, bitslice indexes, and columnar indexes based on the data and query requirements. For example, bitmap indexes are useful for columns with few distinct values, while bitslice indexes are good for range conditions and sums [2].
3.    Tune Table: Use the Tune Table utility to gather statistics about the data in your tables, such as the number of rows, distribution of distinct values, and average length of values. These statistics help the Query Optimizer make better decisions about query plans [1][3].
4.    Query Plan Analysis: Check if your queries are using the defined indexes by examining the query plan. If the plan indicates that the query is reading the master map instead of an index map, it suggests that an index should be created or updated to improve performance [4].
5.    Index Maintenance: Regularly maintain your indexes by rebuilding them if necessary. You can use the %BuildIndices method to build indexes for existing data. Additionally, you can validate indexes using the %ValidateIndices method to ensure they are consistent and functioning correctly [5].
By following these steps, you can identify and implement the necessary indexes to enhance the performance of your SQL queries in InterSystems IRIS.
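Steps 2 and 5 can be sketched as index definitions plus a rebuild for pre-existing rows (the class name is illustrative):

```objectscript
Class demo.Order Extends %Persistent
{

Property CustomerId As %Integer;

Property Status As %String;

/// Standard index supporting WHERE CustomerId = ?
Index CustIdx On CustomerId;

/// Bitmap index: suited to a column with few distinct values.
Index StatusIdx On Status [ Type = bitmap ];

}
```

For rows inserted before an index existed, populate it with Do ##class(demo.Order).%BuildIndices($ListBuild("StatusIdx")).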

Sources
1.InterSystems IRIS Data Platform 2024.3 > Embedded Language Development > SQL > SQL Performance Overview
2.InterSystems IRIS Data Platform 2024.3 > Embedded Language Development > SQL > Defining a High-Performance Schema > Define and Build Indexes
3.The One Query Performance Trick You NEED to Know? Tune Table!
4.A Peek at IRIS Data Platform Performance Strategies
5.Inserting an index without reconstruction

[GOOD]  Describes relationship between globals, objects, and SQL  

InterSystems IRIS is a multi-model DBMS that supports different data models, including globals, objects, and SQL, and allows you to access the same data through different APIs depending on your needs.
1.    Globals: Globals are the fundamental data storage mechanism in InterSystems IRIS. They are multidimensional key-value pairs that are automatically saved to disk. Globals can be accessed directly for high performance and flexibility, and they are used internally by both the object and SQL models. Globals are particularly useful for tasks requiring fast data access and manipulation without the overhead of SQL or object layers [1][2].
2.    Objects: InterSystems IRIS supports object-oriented programming through persistent classes. When you define a persistent class, it automatically generates a storage structure that maps the class properties to global nodes. This allows you to work with data as objects in your application code while the data is stored in globals. The object model provides a convenient way to encapsulate data and behavior, making it easier to manage complex data structures [3][4].
3.    SQL: InterSystems IRIS also provides a robust SQL engine that allows you to interact with data using standard SQL queries. Persistent classes are automatically projected as SQL tables, where class properties correspond to table columns. This means you can use SQL to perform operations like querying, inserting, updating, and deleting data. The SQL engine translates these operations into global accesses, ensuring efficient data manipulation [1][3][4].
The relationship between these models is seamless, allowing you to choose the most appropriate access method for your use case. For example, you can use SQL for complex queries and reporting, objects for application logic, and globals for high-performance data access. This flexibility is a key advantage of InterSystems IRIS, enabling you to build applications that leverage the strengths of each data model [3][4].
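The three access paths can be seen on one record; a hedged sketch assuming a simple persistent class demo.Person with default storage:

```objectscript
// Object access: create and save an instance.
Set p = ##class(demo.Person).%New()
Set p.Name = "Ada"
Set sc = p.%Save()               // writes into the ^demo.PersonD global
Set id = p.%Id()

// SQL access: the class is projected as table demo.Person.
&sql(SELECT Name INTO :name FROM demo.Person WHERE ID = :id)

// Direct global access: default storage keeps properties in a $LIST node
// (the exact slot comes from the class's storage definition).
Write $ListGet(^demo.PersonD(id), 2)
```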

Sources
1.SQL/Non-SQL
2.Globals Are Magic Swords For Managing Data. Part 1
3.Classes, tables and globals - how it all works?
4.InterSystems IRIS Data Platform 2024.3 > Embedded Language Development > Orientation for Programmers > Persistent Objects and InterSystems IRIS SQL

[GOOD]  Determines when streams are the appropriate data type

Streams are appropriate data types when you need to store extremely large amounts of data that exceed the string length limit. InterSystems IRIS provides several stream classes for different types of data:
1.    Character Streams: Use %Stream.GlobalCharacter for large quantities of text stored in global nodes.
2.    Binary Streams: Use %Stream.GlobalBinary for binary data such as images, audio, or video stored in global nodes.
3.    File Streams: Use %Stream.FileCharacter and %Stream.FileBinary for character and binary data stored in external files, respectively.
4.    Temporary Streams: Use %Stream.TmpCharacter and %Stream.TmpBinary for temporary character and binary data that do not need to be saved.
Streams are useful for handling data types like PDF, spreadsheet, RTF, plain text, CSV, and XML, as they can morph between binary and character streams depending on the type of data being persisted. This flexibility makes streams a good choice for storing documents received from various sources. [1] [2] [3]
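A short sketch of writing and then reading back a global character stream:

```objectscript
Set stream = ##class(%Stream.GlobalCharacter).%New()
Do stream.Write("Text far larger than the string limit could go here...")
Do stream.Rewind()
While 'stream.AtEnd {
    Write stream.Read(16000)     // read in chunks
}
```

In a persistent class, a stream is declared like any other property, e.g. Property Contract As %Stream.GlobalCharacter;.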

Sources
1.Global-Streams-to-SQL
2.InterSystems IRIS Data Platform 2024.3 > Embedded Language Development > SQL > Using SQL > Storing and Using Stream Data (BLOBs and CLOBs)
3.InterSystems IRIS Data Platform 2024.3 > Embedded Language Development > Defining & Using Classes > Streams

[GOOD]  Describes InterSystems IRIS support for JSON and XML

InterSystems IRIS provides robust support for both JSON and XML data formats.
For JSON, InterSystems IRIS includes integrated support within ObjectScript, allowing you to work with JSON data structures as easily as with objects or tables. You can create and manipulate dynamic entities at runtime using classes like %Library.DynamicObject and %Library.DynamicArray. These classes provide methods for JSON serialization and deserialization, enabling conversion between dynamic entities and JSON strings. You can also use JSON string literals or ObjectScript dynamic expressions to define values. [1] [2]
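A short ObjectScript sketch of the dynamic-entity API:

```objectscript
// Build a dynamic object with a JSON literal, then extend and serialize it.
Set obj = {"name":"Ada", "skills":["SQL","ObjectScript"]}
Do obj.%Set("active", 1, "boolean")
Write obj.%ToJSON(),!

// Parse a JSON string back into a dynamic entity.
Set parsed = ##class(%DynamicObject).%FromJSON("{""id"":42}")
Write parsed.id,!
```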
For XML, InterSystems IRIS supports various XML standards, including XML 1.0, Namespaces in XML 1.0, XML Schema 1.0, XPath 1.0, and SOAP encoding. It provides tools for XML canonicalization, encryption, and signature, and includes two XSLT processors (Xalan for XSLT 1.0 and Saxon for XSLT 2.0 and 3.0). The SAX Parser in InterSystems IRIS uses the standard Xerces-C++ library, ensuring compliance with the XML 1.0 recommendation. [3]

Sources
1. InterSystems IRIS Data Platform 2024.3 > Embedded Language Development > Web Applications & APIs > REST and JSON > Using JSON > Using JSON in ObjectScript
2. InterSystems IRIS Data Platform 2024.3 > Embedded Language Development > Web Applications & APIs > REST and JSON > Using JSON > Converting Dynamic Entities to and from JSON
3. InterSystems IRIS Data Platform 2024.3 > Embedded Language Development > XML Tools > XML Standards


1.3.    Plans data lifecycle

[AVERAGE]  Evaluates strategies for data storage and retrieval (e.g., MDX, SQL, object)

InterSystems IRIS supports several strategies for data storage and retrieval, including MDX, SQL, and object access. ObjectScript, the primary programming language of InterSystems IRIS, lets you access data directly in globals, which are persistent multidimensional sparse arrays. The same data is also accessible through the platform's native object and SQL support, giving you flexibility in how you manage and retrieve it [1].
For example, you can use SQL for structured queries, MDX for multidimensional analysis of business intelligence cubes, and object access for working with data in an object-oriented manner. Each method has its own use cases and advantages depending on the specific requirements of your application.
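The contrast between object and SQL access over the same storage can be sketched as follows. `Sample.Person` is a hypothetical persistent class used only for illustration:

```objectscript
// --- Object access: open an instance by ID and read a property ---
Set person = ##class(Sample.Person).%OpenId(1)
If $IsObject(person) { Write person.Name,! }

// --- Embedded SQL access: the class is projected as table Sample.Person ---
&sql(SELECT Name INTO :name FROM Sample.Person WHERE ID = 1)
If SQLCODE = 0 { Write name,! }
```

Both approaches read the same underlying globals; which one fits best depends on whether the workload is record-oriented (objects) or set-oriented (SQL).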

Sources
1. InterSystems IRIS Data Platform 2024.3 > Embedded Language Development > ObjectScript
2. Newbie's Corner Index

[GOOD]  Manages data life cycles (aka CRUD)

InterSystems IRIS provides robust support for managing data life cycles, commonly referred to as CRUD (Create, Read, Update, Delete) operations. Here are some key points on how IRIS handles these operations:
1.    API for Table Operations: As an example, one community-built database management tool provides a REST API to interact with the data stored in the user's tables. The API supports five basic operations:
o    Get All Records by Condition: Retrieve records from a table based on specific conditions or filters.
o    Get All Records: Fetch all records from a table without any filters.
o    Save: Add new records to a table.
o    Update: Modify existing records in a table.
o    Delete: Remove records from a table. [1]
2.    RESTForms Project: This project provides a generic REST API backend for modern web applications, allowing CRUD operations over exposed classes. It includes features such as:
o    CRUD over exposed class: Get class metadata, create, update, and delete class properties.
o    CRUD over object: Get, create, update, and delete objects.
o    Read over object collections (via SQL): protected from SQL injection.
o    Self-discovery: Get a list of available classes and their metadata, and perform CRUD operations based on that metadata. [2]
3.    Example Implementation: In a practical example, a REST service can be created to handle CRUD operations for a class. This involves creating methods to handle GET, POST, PUT, and DELETE requests, which correspond to reading, creating, updating, and deleting records, respectively. [3]
These features and tools make it straightforward to manage data life cycles within InterSystems IRIS, providing both flexibility and security for data operations.
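A REST service of the kind described in point 3 can be sketched as a dispatch class extending `%CSP.REST`. The class and the `Demo.Person` persistent class it serves are hypothetical names for illustration; only the read (GET) handler is shown:

```objectscript
Class Demo.CRUDService Extends %CSP.REST
{

XData UrlMap [ XMLNamespace = "http://www.intersystems.com/urlmap" ]
{
<Routes>
  <Route Url="/person"     Method="POST"   Call="CreatePerson"/>
  <Route Url="/person/:id" Method="GET"    Call="GetPerson"/>
  <Route Url="/person/:id" Method="PUT"    Call="UpdatePerson"/>
  <Route Url="/person/:id" Method="DELETE" Call="DeletePerson"/>
</Routes>
}

/// Read one record (the "R" in CRUD) and return it as JSON
ClassMethod GetPerson(id As %String) As %Status
{
    Set person = ##class(Demo.Person).%OpenId(id)
    If '$IsObject(person) {
        Set %response.Status = ..#HTTP404NOTFOUND
        Quit $$$OK
    }
    Set json = {"id": (id), "name": (person.Name)}
    Write json.%ToJSON()
    Quit $$$OK
}

}
```

The remaining handlers would follow the same shape, mapping POST to `%New`/`%Save`, PUT to `%OpenId` plus `%Save`, and DELETE to `%DeleteId`.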

Sources
1. Inside Database Management Tool
2. RESTForms - REST API for your classes
3. Let's write an Angular 1.x app with a Caché REST backend - Part 9

[AVERAGE]  Describes expected application performance as a function of data volumes, users, and processes

Application performance can be influenced by various factors such as data volumes, the number of users, and the processes involved. To manage and optimize performance, it is essential to monitor and analyze system metrics continuously.
1.    Monitoring Tools: InterSystems provides several tools for monitoring application performance. For example, the Caché pButtons utility (known as ^SystemPerformance in InterSystems IRIS) collects log files and generates a readable HTML performance report from them, including the instance configuration, performance metrics, operating system resource metrics, and more. Regularly collecting and reviewing these metrics helps identify performance issues early [1].
2.    Application Performance Management (APM): APM focuses on monitoring performance from the users’ perspective, measuring business transactions, and response times. It helps in understanding how well the application is performing and identifying areas of concern. This approach complements system performance monitoring by providing insights into the level of service provided to users [2].
3.    Distributed Caching and ECP: For scaling applications to handle increased user volume and data, InterSystems IRIS uses distributed caching with the Enterprise Cache Protocol (ECP). This allows the application to distribute the workload across multiple application servers, each maintaining its own cache. This setup helps in efficiently managing user requests and maintaining performance by reducing the load on a single server [3][4].
4.    Capacity Planning: Proper capacity planning is crucial for ensuring that the system can handle the expected load. This involves sizing the system’s CPU, memory, and storage requirements based on the application’s needs. For example, using SPECint benchmarks can help translate CPU usage between different processor types, aiding in right-sizing the hardware for the application [5].
By leveraging these tools and strategies, you can effectively manage and optimize application performance as data volumes, user numbers, and processes scale.

Sources
1. InterSystems Data Platforms and performance – Part 1
2. What is APM?
3. Data Platforms and Performance - Part 7 ECP for performance, scalability and availability
4. InterSystems IRIS Data Platform 2024.3 > Architecture > Scalability > Basics: Distributed Caching
5. InterSystems Data Platforms and performance – Part 3: Focus on CPU
