User-Friendly External Smartblobs Using a Shadow Directory

I am very excited about the HCL Informix® 15 external smartblob feature.

If you are not familiar with them, external smartblobs allow the user to store actual Binary Large Object (blob) and Character Large Object (clob) data external to the database. Metadata about that external storage is maintained by the database.

Notes: This article does NOT discuss details of the smartblobs feature itself, but rather proposes a solution to make the functionality more user-friendly. For details on feature behavior, setup, and new functions, see the documentation.

As of the writing of this blog, v15.0 does not have the ifx_lo_path function defined, which is required below. This has been reported to engineering. The workaround is to create it yourself with the following command:

create dba function ifx_lo_path(blob)
  returns lvarchar
  external name '(sq_lo_path)'
  language C;

This article also does not discuss details of client programming required to INSERT blobs and clobs into the database.

The external smartblob feature was built for two main reasons:

1. Backup size

Storing blobs in the database itself can cause the database to become extremely large. Backups of such a database take an inordinate amount of time, and level-0 backups can become impossible. Offloading the actual blob contents to an external file system lessens the HCL Informix backup burden by putting the blob data somewhere else. The database still governs the storage of, and access to, the blob, but the physical blob is housed externally.

2. Easy access to blobs

Users would like easy access to blob data, with familiar tools, without having to go through the database.

Using External Smartblobs in HCL Informix 15

HCL Informix 15 introduces external smartblobs. When you define an external smartblob space, you specify the external directory location (outside the database) where you would like the actual blob data to be stored. Then you assign blob column(s) to that external smartblob space when you CREATE TABLE. When a row is INSERTed, HCL Informix stores the blob data in the defined directory using an internal identifier for the filename.

Here's an example of a customer forms table: custforms (denormalized and hardcoded for simplicity). My external sbspace directory is /home/informix/blog/resources/esbsp_dir1.

CREATE TABLE custforms(formid SERIAL, company CHAR(20), year INT, lname CHAR(20), 
formname CHAR(50), form CLOB) PUT form IN (esbsp);

Here, I INSERT a 2023 TaxForm123 document from a Java program for a woman named Sanchez, who works for Actian:

try (PreparedStatement p = c.prepareStatement(
         "INSERT INTO custforms (company, year, lname, formname, form) VALUES (?,?,?,?,?)");
     FileInputStream is = new FileInputStream("file.xml")) {
    p.setString(1, "Actian");
    p.setInt(2, 2023);          // year is an INT column
    p.setString(3, "Sanchez");
    p.setString(4, "TaxForm123");
    p.setBinaryStream(5, is);
    p.executeUpdate();
}

After I INSERT this row, my external directory and file would look like this:

[informix@schma01-rhvm03 resources]$ pwd
/home/informix/blog/resources
[informix@schma01-rhvm03 resources]$ ls -l esbsp*
-rw-rw---- 1 informix informix 10240000 Oct 17 13:22 esbsp_chunk1

esbsp_dir1:
total 0
drwxrwx--- 2 informix informix 41 Oct 17 13:19 IFMXSB0
[informix@schma01-rhvm03 resources]$ ls esbsp_dir1/IFMXSB0
LO[2,2,1(0x102),1729188125]

Here, LO[2,2,1(0x102),1729188125] is an actual file that contains the data, which I could access directly. The problem is that if I want to access this file for Ms. Sanchez directly, I would first have to figure out that it belongs to her and is the tax document I want. It's very cryptic!

A User-Friendly Smartblob Solution

When I talk to Informix customers, they say they love the new external smartblobs feature but wish it could be a little more user-friendly.

As in the above example, instead of putting Sanchez's 2023 TaxForm123 into a general directory called IFMXSB0, in a file called LO[2,2,1(0x102),1729188125], which together are meaningless to an end user, wouldn't it be nice if the file were located in an intuitive place like /home/forms/Actian/2023/TaxForm123/Sanchez.xml, or something similarly meaningful, organized how YOU want it?

Having HCL Informix do this automatically is a little easier said than done, primarily because the database would not intuitively know how any one customer wants to organize their blobs. What exact directory substructure should be used? From which column or columns should file names be formed, and in what order? Every use case is different.

Leveraging a User-Friendly Shadow Directory

The following solution shows how you can create your own user-friendly logical locations for your external smartblobs by automatically maintaining a lightweight shadow directory structure to correspond to actual storage locations. The solution uses a very simple system of triggers and stored procedures to do this.

Note: Examples here are shown on Linux, but other UNIX flavors should work also.

How to Set Up in 4 Steps

For each smartblob column in question:

STEP 1: Decide how you want to organize access to your files.

Decide what you want the base of your shadow directory to be and create it. In my case for this blog, it is: /home/informix/blog/resources/user-friendly. You could probably implement this solution without a set base directory (as seen in the examples), but that may not be a good idea because users would unknowingly start creating directories everywhere.
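
For example, on my Linux sandbox the base directory can be created up front with a single command (a minimal sketch; adjust the path, ownership, and permissions to your environment):

mkdir -p /home/informix/blog/resources/user-friendly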

STEP 2: Create a create_link stored procedure and corresponding trigger for INSERTs.

This procedure makes sure that the desired data-driven subdirectory structure exists under the base (mkdir -p), then forms a user-friendly logical link to the Informix smartblob file. From the trigger, you must pass this procedure all the columns from which you want to form the directory structure and filename.

CREATE PROCEDURE

CREATE PROCEDURE create_link (p_formid INT, p_company CHAR(20), p_year INT,
p_lname CHAR(20), p_formname CHAR(50))
DEFINE v_oscommand CHAR(500);
DEFINE v_custlinkname CHAR(500);
DEFINE v_ifmxname CHAR(500);
DEFINE v_basedir CHAR(100);
-- set the base directory
LET v_basedir = '/home/informix/blog/resources/user-friendly';
-- make sure directory tree exists
LET v_oscommand = 'mkdir -p ' || TRIM(v_basedir) || '/' || TRIM(p_company) || '/' || 
TO_CHAR(p_year);
SYSTEM v_oscommand; 

-- form full link name 
LET v_custlinkname = TRIM(v_basedir) || '/' || TRIM(p_company) || '/' || TO_CHAR(p_year) 
|| '/' || TRIM(p_lname) || '.' || TRIM(p_formname) || '.' || TO_CHAR(p_formid);

-- get the actual location 
SELECT IFX_LO_PATH(form::LVARCHAR) INTO v_ifmxname FROM custforms WHERE formid = p_formid; 

-- create the os link 
LET v_oscommand = 'ln -s -f ' || '''' || TRIM(v_ifmxname) || '''' || ' ' || v_custlinkname; 
SYSTEM v_oscommand;

END PROCEDURE

CREATE TRIGGER

CREATE TRIGGER ins_tr INSERT ON custforms REFERENCING new AS post
FOR EACH ROW(EXECUTE PROCEDURE create_link (post.formid, post.company,
post.year, post.lname, post.formname));

STEP 3: Create a delete_link stored procedure and corresponding trigger for DELETEs.

This procedure will delete the shadow directory link if the row is deleted.

CREATE PROCEDURE

CREATE PROCEDURE delete_link (p_formid INT, p_company CHAR(20), p_year INT,
p_lname CHAR(20), p_formname CHAR(50))
DEFINE v_oscommand CHAR(500);
DEFINE v_custlinkname CHAR(500); 
DEFINE v_basedir CHAR(100);
-- set the base directory
LET v_basedir = '/home/informix/blog/resources/user-friendly';
-- form full link name
LET v_custlinkname = TRIM(v_basedir) || '/' ||
TRIM(p_company) || '/' || TO_CHAR(p_year) || '/' || TRIM(p_lname) || '.'
|| TRIM(p_formname) || '.' || TO_CHAR(p_formid);
-- remove the link
LET v_oscommand = 'rm -f -d ' || v_custlinkname;
SYSTEM v_oscommand;

END PROCEDURE

CREATE TRIGGER

CREATE TRIGGER del_tr DELETE ON custforms REFERENCING old AS pre FOR EACH ROW
(EXECUTE PROCEDURE delete_link (pre.formid, pre.company, pre.year, pre.lname, pre.formname));

STEP 4: Create a change_link stored procedure and corresponding trigger for UPDATEs, if desired. In my example, Ms. Sanchez might marry Mr. Simon, and an UPDATE to her last name occurs in the database. I may then want to change all my user-friendly names from Sanchez to Simon. This procedure deletes the old link and creates a new one.

Notice that the update trigger only needs to fire on the columns that form your directory structure and filenames.

CREATE PROCEDURE

CREATE PROCEDURE change_link (p_formid INT, p_pre_company CHAR(20), 
p_pre_year INT, p_pre_lname CHAR(20), p_pre_formname CHAR(50), p_post_company CHAR(20), 
p_post_year INT, p_post_lname CHAR(20), p_post_formname CHAR(50))

DEFINE v_oscommand CHAR(500);
DEFINE v_custlinkname CHAR(500);
DEFINE v_ifmxname CHAR(500);
DEFINE v_basedir CHAR(100);
-- set the base directory
LET v_basedir = '/home/informix/blog/resources/user-friendly';

-- get rid of old

-- form old full link name
LET v_custlinkname = TRIM(v_basedir) || '/' || TRIM(p_pre_company) || '/' || 
TO_CHAR(p_pre_year) || '/' || TRIM(p_pre_lname) || '.' || TRIM(p_pre_formname) || '.' 
|| TO_CHAR(p_formid) ;

-- remove the link and empty directories
LET v_oscommand = 'rm -f -d ' || v_custlinkname;
SYSTEM v_oscommand;

-- form the new
-- make sure directory tree exists
LET v_oscommand = 'mkdir -p ' || TRIM(v_basedir) || '/' || TRIM(p_post_company) || '/' || 
TO_CHAR(p_post_year);
SYSTEM v_oscommand;

-- form full link name
LET v_custlinkname = TRIM(v_basedir) || '/' || TRIM(p_post_company) || '/' || 
TO_CHAR(p_post_year) || '/' || TRIM(p_post_lname) || '.' || TRIM(p_post_formname) 
|| '.' || TO_CHAR(p_formid) ;

-- get the actual location
-- this is the same as before as id has not changed
SELECT IFX_LO_PATH(form::LVARCHAR) INTO v_ifmxname FROM custforms WHERE formid = p_formid;

-- create the os link
LET v_oscommand = 'ln -s -f ' || '''' || TRIM(v_ifmxname) || '''' || ' ' || v_custlinkname;
SYSTEM v_oscommand;

END PROCEDURE

CREATE TRIGGER

CREATE TRIGGER upd_tr UPDATE OF formid, company, year, lname, formname ON custforms
REFERENCING OLD AS pre NEW as post

FOR EACH ROW(EXECUTE PROCEDURE change_link (pre.formid, pre.company, pre.year, pre.lname, 
pre.formname, post.company, post.year, post.lname, post.formname));

Results Example

Back to our example.

With this infrastructure in place, in addition to the Informix-named file, I now have user-friendly links on my file system that I can easily locate and identify.

INSERT

[informix@schma01-rhvm03 2023]$ pwd
/home/informix/blog/resources/user-friendly/Actian/2023
[informix@schma01-rhvm03 2023]$ ls
Sanchez.TaxForm123.2

If I do an ls -l, you can see that it is a link to the Informix blob file.

[informix@schma01-rhvm03 2023]$ ls -l
total 0
lrwxrwxrwx 1 informix informix 76 Oct 17 14:20 Sanchez.TaxForm123.2 -> 
/home/informix/blog/resources/esbsp_dir1/IFMXSB0/LO[2,2,1(0x102),1729188126]

UPDATE

If I then update her last name with UPDATE custforms SET lname = 'Simon' WHERE formid = 2, my file system now looks like this:

[informix@schma01-rhvm03 2023]$ ls -l
lrwxrwxrwx 1 informix informix 76 Oct 17 14:25 Simon.TaxForm123.2 -> 
/home/informix/blog/resources/esbsp_dir1/IFMXSB0/LO[2,2,1(0x102),1729188126]

DELETE

If I then go and DELETE this form with DELETE FROM custforms where formid=2, my directory structure looks like this:

[informix@schma01-rhvm03 2023]$ pwd
/home/informix/blog/resources/user-friendly/Actian/2023
[informix@schma01-rhvm03 2023]$ ls
[informix@schma01-rhvm03 2023]$

We Welcome Your Feedback

Please enjoy the new HCL Informix 15 external smartblob feature.

I hope this idea can make external smartblobs easier for you to use. If you have any feedback on the idea, especially on enhancements or experience in production, please feel free to contact me at mary.schulte@hcl-software.com. I look forward to hearing from you!

Find out more about the launch of HCL Informix 15.

Notes

1. Shadow directory permissions. In creating this example, I did not explore directory and file permissions, but rather just used general permissions settings on my sandbox server. Likely, you will want to control permissions to avoid some of the anomalies I discuss below.

2. Manual blob file delete. With external smartblobs, if permissions are not controlled, it is possible that a user might somehow delete the physical smartblob file itself from its directory. HCL Informix itself cannot prevent this from happening. If it does happen, HCL Informix does NOT delete the corresponding row; the blob file will just be missing. There may be aspects of links that could handle this automatically, but I have not investigated them for this blog.

3. Link deletion in the shadow directory. If permissions are not controlled, it is possible that a user might delete a logical link formed by this infrastructure. This solution does not detect that. If this is an issue, I would suggest a periodic maintenance job that cross-references the shadow directory links with the blob files to detect missing or broken links (see the sketch after these notes). For those blobs with missing links, write a database program to look up the row's location with the IFX_LO_PATH function and re-form the missing link.

4. Unique identifiers. I highly recommend using unique identifiers in this solution. In this simple example, I used formid. You don't want to clutter things up, of course, but depending on how you structure your shadow directories and filenames, you may need to include more unique identifiers to avoid duplicate directory and link names.

5. Empty directories. I did not investigate whether there are rm options for the delete stored procedure that would clean up empty directories left behind when the last item in a directory is deleted (a possible cleanup command is sketched after these notes).

6. Production overhead. It is known that excessive triggers and stored procedures can add overhead to a production environment. For this blog, it is assumed that OLTP activity on blobs is not excessive, so production overhead should not be an issue. That said, this solution has NOT been tested at scale.

7. NULL values. Make sure to consider the presence and impact of NULL values in columns used in this solution. For simplicity, I did not handle them here.
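
Two small sketches related to notes 3 and 5, assuming the base directory used in this blog and GNU find on Linux: the first lists shadow links whose target blob files no longer exist, and the second removes subdirectories that are already empty while leaving the base directory itself in place.

# Note 3: list user-friendly links whose Informix blob file is missing
find /home/informix/blog/resources/user-friendly -xtype l

# Note 5: remove empty subdirectories left behind after deletes
find /home/informix/blog/resources/user-friendly -mindepth 1 -type d -empty -delete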

Informix is a trademark of IBM Corporation in at least one jurisdiction and is used under license.


The post User-Friendly External Smartblobs Using a Shadow Directory appeared first on Actian.


Author: Mary Schulte

Securing Your Data With Actian Vector

The need for securing data from unauthorized access is not new. It has been required by laws for handling personally identifiable information (PII) for quite a while. But the increasing use of data services in the cloud for all kinds of proprietary data that is not PII now makes data security an important part of most data strategies.

This is the start of a series of blog posts that take a detailed look at how data security can be ensured with Actian Vector. The first post explains the basic concept of encryption at rest and how Actian Vector's Database Encryption functionality implements it.

Understanding Encryption at Rest

Encryption at rest refers to the encryption of persisted data, usually on disk or in cloud storage. In a database system, this data is mainly user data in tables and indexes, but it also includes the metadata describing the organization of the user data. The main purpose of encryption at rest is to secure the persisted data from unauthorized direct access on disk or in cloud storage, that is, access without a connection to the database system.

The encryption can be transparent to the database applications. In this case, encryption and decryption are managed by the administrator, usually at the level of databases. The application then does not need to be aware of the encryption. It connects to the database to access and work with the data as if there were no encryption at all. In Actian Vector, this type of encryption at rest is called database encryption.

Encryption at the application level, on the other hand, requires the application to handle the encryption and decryption. Often this means that the user of the application has to provide an encryption key for both encryption (e.g., when data is inserted) and decryption (e.g., when data is selected). While more complicated, this provides more control to the application and the user.

For example, encryption can be applied at a finer granularity to specific tables, columns in tables, or even individual record values in table columns. It may be possible to use individual encryption keys for different data values. Thus, users can encrypt their private data with their own encryption key and be sure that, without this key, no other user can see the data in clear text. In Actian Vector, encryption at the application level is referred to as function-based encryption.

Using Database Encryption in Actian Vector

In Actian Vector, the encryption that is transparent to the application works at the scope of a database and is therefore called database encryption. Whether a database is encrypted or not is determined when the database is created and cannot be changed later. When a database is created with database encryption, all the persisted data in tables and indexes, as well as the metadata for the database, is encrypted.

The encryption method is 256-bit AES, which requires a 32-byte symmetric encryption key. Symmetric means that the same key is used to encrypt and decrypt the data. This key is individually generated for each encrypted database and is called the database (encryption) key.

To have the database key available, it is stored in an internal system file of the database server, where it is protected by a passphrase. This passphrase is provided by the user when creating the database. However, the database key is not used to directly encrypt the user data. Instead, it is used to encrypt, i.e., protect, yet another set of encryption keys that in turn are used to encrypt the user data in the tables and indexes. This set of encryption keys is called the table (encryption) keys.

Once the database is created, the administrator can use the chosen passphrase to "lock" the database. When the database is locked, the encrypted data cannot be accessed. Likewise, the administrator also uses the passphrase to "unlock" a locked database and thus re-enable access to the encrypted data. When the database is unlocked, the administrator can change the passphrase. If desired, it is also possible to rotate the database key when changing the passphrase.

The rotation of the database key is optional, because it means that the whole container of the table keys needs to be decrypted with the old database key and then re-encrypted with the new database key. Because this container of the table keys also contains other metadata, it can be quite large, and thus the rotation of the database key can become a slow and computationally expensive operation. Database key rotation is therefore only recommended if there is a reasonable suspicion that the database key was compromised. Most of the time, changing only the passphrase should be sufficient, and it is done quickly.

With Actian Vector it is also possible to rotate the table encryption keys. This is done independently of changing the passphrase and the database key, and can be performed on a complete database as well as on individual tables. For each key that is rotated, the data must be decrypted with the old key and re-encrypted with the new key. In this case, we are dealing with the user data in tables and indexes. If this data is very large, the key rotation can be very costly and time-consuming. This is especially true when rotating all table keys of a database.

A typical workflow for using database encryption in Actian Vector:

  • Create a database with encryption:
      1. createdb -encrypt <database_name>

This command prompts the user twice for the passphrase and then creates the database with encryption. The new database remains unlocked, i.e. it is readily accessible, until it is explicitly locked or until shutdown of the database system.

It is important that the creator of the database remembers the provided passphrase because it is needed to unlock the database and make it accessible, e.g. after a restart of the database system.

  • Lock the encrypted database:
      1. Connect to the unlocked database with the Terminal Monitor:
        sql <database_name>
      2. SQL to lock the database:
        DISABLE PASSPHRASE '<user supplied passphrase>'; \g

The SQL statement locks the database. New connect attempts to the database are rejected with a corresponding error. Sessions that connected previously can still access the data until they disconnect.

To make the database lock take effect immediately for already connected sessions as well, additionally issue the following SQL statement:

      1. CALL X100(TERMINATE); \g
  • Unlock the encrypted database:
      1. Connect to the locked database with the Terminal Monitor and option "-no_x100":
        sql -no_x100 <database_name>
      2. SQL to unlock the database:
        ENABLE PASSPHRASE '<user supplied passphrase>'; \g

The connection with the "-no_x100" option connects without access to the warehouse data, but allows the administrative SQL statement to unlock the database.

  • Change the passphrase for the encrypted database:
      1. Connect to the unlocked database with the Terminal Monitor:
        sql <database_name>
      2. SQL to change the passphrase:
        ALTER PASSPHRASE '<old user supplied passphrase>' TO
        '<new passphrase>'; \g

Again, it is important that the administrator remembers the new passphrase.

After changing the passphrase for an encrypted database, it is recommended to perform a new database backup (a.k.a. "database checkpoint") to ensure continued full database recoverability.
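
For example, a checkpoint of the database can then be taken with the ckpdb utility (a minimal sketch; see the Vector documentation for the options appropriate to your installation):

ckpdb <database_name>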

  • When the database is no longer needed, destroy it:
      1. destroydb <database_name>

Note that the passphrase of the encrypted database is not needed to destroy it. The command can only be performed by users with the proper privileges, i.e. the database owner and administrators.

This first blog post in the database security series explained the concept of encryption at rest and how transparent encryption, called Database Encryption in Actian Vector, is used.

The next blog post in this series will take a look at function-based encryption in Actian Vector.

The post Securing Your Data With Actian Vector appeared first on Actian.


Author: Martin Fuerderer

Actian Zen: The Market-Leading Embedded Database – Proven!

Get to Know Actian Zen

Actian Zen is a high-performance, embedded database management system designed for efficient data management in various applications, particularly IoT and edge computing. It offers several key features:

  • High Performance: Optimized for real-time data processing and quick response times.
  • Scalability: Can handle increasing data volumes and diverse endpoints.
  • Reliability: Ensures data integrity and availability even in challenging environments.
  • Security: Provides robust security features to protect sensitive data.
  • Flexibility: Supports NoSQL and SQL data access.
  • Easy Integration: Integrates with applications and devices using APIs.
  • Low Maintenance: Minimal administrative overhead.

With more than 13,000 customers, Actian Zen has been utilized around the world across multiple industries to capture data generated by mobile devices, IoT sensors, edge gateways and even complex machinery, giving its users a very high level of confidence in reporting performance at the edge.

Putting Zen Through its Paces: The TPCx-IoT Benchmark

Actian has enjoyed terrific success with its Zen customer base, and in turn, its customers have greatly benefited from Zen's strong performance. At the same time, Actian wanted an unbiased third-party review, comparison, and benchmark of the product's performance alongside similar market offerings. In October 2024, Actian commissioned the McKnight Consulting Group to run a TPCx-IoT benchmark against two of its key competitors: MongoDB and MySQL.

TPCx-IoT is a benchmark developed by the Transaction Processing Performance Council (TPC) to measure the performance, scalability, and price-performance of IoT (Internet of Things) data ingestion systems. It simulates real-time data ingestion and processing from IoT devices, evaluating a system's ability to handle large volumes of time-series data efficiently.

Key features of the TPCx-IoT benchmark include:

  • Real-World Simulation: The benchmark simulates a realistic IoT scenario with a large
    number of devices generating data.
  • Performance Metrics: It measures performance metrics like throughput (IoTps),
    latency, and price-performance.
  • Vendor-Neutral: It provides a fair and objective comparison of different IoT data
    management solutions.
  • Scalability Testing: It evaluates the system's ability to handle increasing data volumes
    and device counts.

By using the TPCx-IoT benchmark, organizations can compare the performance of different IoT data management solutions and select the best solution for their specific needs.

Discussion of Results: Actian Zen is a Powerful IoT Engine

The results of the benchmark test showed that Actian Zen was far superior to its key competitors in two important areas: throughput and latency.

  • Throughput: Actian Zen processes data significantly faster than the other offerings, reaching up to 7,902 records per second compared to MongoDB's 2,099. MySQL lags far behind at 162 records per second. This means that Actian Zen has a throughput capability up to 50x the competition!
    [Chart: Zen throughput benchmark results]
  • Latency: Actian Zen consistently demonstrates the lowest latency (time taken to process each record) across various sensor configurations, displaying up to 650x lower latency than the competition. MySQL exhibits the highest latency.
    [Chart: Zen latency benchmark]

According to the McKnight Consulting Group, "In our evaluation of various embedded databases, Actian Zen emerged as a very compelling solution for enterprise-grade IoT workloads."

Given the importance of real-time data availability at the edge, it is critical that both throughput and latency performance be as strong as possible, especially across the myriad use cases in healthcare, logistics, transportation, and other industries where a true, confident view of current endpoint performance is essential. This is why so many customers, like Global Shop Solutions and Taifun Software AG, have confidence in Actian Zen, and why so many other organizations are taking a closer look.

Next Steps: Get the Report!

Curious about the benchmark? Want to know more? Here's how to get started:

  • To read the full TPCx-IoT benchmark report from the McKnight Consulting Group, click here.
  • To find out more about Zen, check out our website here.

The post Actian Zen: The Market-Leading Embedded Database – Proven! appeared first on Actian.


Author: Phil Ostroff

Experience Near-Unlimited Storage Capacity With HCL Informix® 15

We are thrilled to unveil HCL Informix® 15, re-imagined for organizations looking for the best way to modernize out-of-support IBM® Informix® applications. Our customers love HCL Informix because it is fast, reliable, and scalable. With the release of HCL Informix 15, we build upon this proud heritage with:

  • HCL Informix 4GL, a fourth-generation business application development environment that is designed to simplify the building of data-centric business applications, now available from Actian.
  • Larger row and page addresses that enhance scalability for large-scale data storage and processing. The new maximum capacity for a single instance is four times the estimated size of the internet.
  • External smartblobs enable the storage of binary large objects like static documents, videos, and photos in an external file system to facilitate faster archiving.
  • Invisible indexes help developers and DBAs fine-tune queries: by flexibly omitting an index, they can identify whether it is critical to specific queries and whether it impacts query runtime.

These capabilities fortify HCL Informix's already solid foundation to underpin the next generation of mission-critical applications. They reflect our vision for a more powerful offering that guarantees seamless business continuity and secures the longevity of your organization's existing applications.

HCL Informix 15 now includes cloud-enabled product capabilities such as a Kubernetes containerization deployment option and updated REST APIs (previously only available in HCL OneDB). For customers using HCL OneDB 1.0 and 2.0, we will adhere to the announced lifecycle dates and work with you on a recommended in-place upgrade to HCL Informix.

HCL Informix customers like Equifax are looking forward to taking advantage of these new capabilities to improve their business use cases in the near future.

"HCL Informix 15 will empower Equifax to quickly process a steady stream of payments, claims decisions, tax verifications, and more, enabling us to make data-driven decisions," said Nick Fuller, Associate Vice President of Technology at Equifax. "Its capacity to handle vast amounts of data gives us confidence in its ability to meet our demand for rapid and efficient processing."

Watch the Webinar >

Building an Advanced Database for Modern Enterprise Applications

4GL: Easily maintain and recompile existing 4GL applications

[Image: HCL Informix 4GL]

While many IBM Informix customers are familiar with 4GL, Actian is now offering HCL Informix 4GL and HCL Informix SQL for the first time. HCL Informix customers can leverage 4GL and ISQL to develop and debug applications, including building new menus, forms, screens, and reports with ease. 4GL reduces the time it takes to build and maintain HCL Informix applications and perform database operations like querying, updating, and managing data. Informix 4GL has a powerful report writer that enables the creation of complex reports. This capability is particularly useful for generating business reports from data stored in HCL Informix.

HCL Informix 4GL accelerates the building of applications such as:

  • Accounting Systems: Track money owed by and to the business, including invoicing, payment processing, and reports.
  • Inventory Management Systems: Manage storage locations, stock movements, and inventory audits.
  • Human Resources Systems: Maintain detailed records of employee information, performance, and benefits.

HCL Informix 15 Server Re-Architected for Massive Storage Capacity Improvement

Larger Row and Page Addresses: Manage Large Data Sets Without Compression

Have peace of mind knowing that data volume limitations are an issue of the past with HCL Informix 15. That means improved reliability and better use of resources because organizations won't need to compress or fragment tables.

When Informix Turbo launched in 1989, Informix architects believed 4 bytes would more than suffice for uniquely addressing each row, so each page could hold a maximum of 255 rows and each table could have a maximum of 16.7 million pages. Now, some of the largest HCL Informix customers are pushing those original limits to the edge. While it is possible to fragment tables to get around the max page limit, that's an imperfect solution at scale. So we've expanded storage limits dramatically: the maximum storage capacity is now half a yottabyte, four times the estimated size of the internet.

[Image: Large data sets with HCL Informix]

External Smartblobs: Store Large Objects With Ease

[Image: External smartblobs]
Large objects like video or audio have traditionally been difficult for transactional databases like HCL Informix to manage because they need to be compressed to store the object efficiently, which takes time.

With HCL Informix 15, external Smartblobs enable developers to store the objects in a file system, while only keeping a record of the metadata. Instead of compressing the data, users can now create a special smartblob space to store the file metadata, with the object files stored externally.

External Smartblobs delivers benefits across a variety of use cases including:

  • Quality Assurance: Analyze how well a real-time monitoring system built on HCL Informix detects faulty products on an assembly line. Auditors can identify the discarded product in the metadata and find the image files of the faulty product without impacting the underlying application.
  • Tax Authority: Tax administrators need to capture tax returns in case they need to audit a company or individual. They can store the static tax return documents with a specific ID and access them through the HCL Informix application just by using the metadata.

Invisible Indexes: Optimize Your Queries Faster

Indexes are special data structures that improve the speed of data retrieval operations on a database table. They work similarly to an index in a book, allowing the database to find and access the data faster without having to scan every row in a table. However, not every index will be used for the queries in an application. HCL Informix 15 enables users to make certain indexes invisible when running an application to help test which indexes impact queries and which ones do not for better operational efficiency.

Invisible indexes support real-world use cases such as:

  • E-commerce Platforms often deal with large volumes of transactions and queries. Invisible indexes can be used to test and optimize query performance without disrupting the shopping experience.
  • Healthcare System databases require efficient data retrieval for patient records and research. Invisible indexes can help optimize these queries without affecting the overall system.
  • Customer Relationship Management (CRM) systems handle vast amounts of customer data. Invisible indexes can be used to improve the performance of specific queries related to customer interactions and their history.

Start Your Modernization Project With HCL Informix 15

The Actian team is ready to support you as you get started on your modernization project with HCL Informix 15.

Check out the on-demand webinar "Secure Your Future with HCL Informix® 15" to learn more about HCL Informix 15. Also, see how your peers are using HCL Informix to modernize their applications. Plus, wait until the end to hear about our limited one-time offer.

Watch the Webinar >

Additional Resources:

Informix is a trademark of IBM Corporation in at least one jurisdiction and is used under license.


The post Experience Near-Unlimited Storage Capacity With HCL InformixĀ® 15 appeared first on Actian.


Author: Emily Taylor

The Essential Guide to Modernizing HCL Informix Applications (Part 1)

Welcome to the first installment of my four-part blog series on HCL Informix® application modernization.

Organizations like yours face increasing pressure to modernize their legacy applications to remain competitive and meet customer needs. HCL Informix, a robust and reliable database platform, has been a cornerstone of many businesses for decades. Now, as technology advances and business needs change, HCL Informix can play a new role: helping you reevaluate and modernize your applications.

In the HCL Informix Modernization Checklist, I outline four steps to planning your modernization journey:

  1. Start building your business strategy
  2. Evaluate your existing Informix database environment
  3. Kick off your modernization project
  4. Learn, optimize, and innovate

Throughout this modernization series, we will dedicate a blog to each of these steps, delving into the strategic considerations, technical approaches, and best practices so you can get your project started on the right track.

Start building your business strategy

Establish your application modernization objectives

The initial step in any application migration and modernization project is to clearly define the business problems you are trying to solve and optimize your project planning to best serve those needs. For example, you may be facing challenges with:

  • Security and compliance
  • Stability and reliability
  • Performance bottlenecks and scalability
  • Web and modern APIs
  • Technological obsolescence
  • Cost inefficiencies

By defining these parameters, you can set a clear objective for your migration and modernization efforts. This will guide your decision-making process and help in selecting the right strategies and technologies for a successful transformation.

Envision the end result

Understanding the problem you want to address is crucial, but it's equally important to develop a solution. Start by envisioning an ideal scenario. For instance, consider goals like:

  • Real-time responses
  • Scale to meet user demand
  • Update applications with zero downtime
  • Zero security incidents
  • 100% connectivity with other applications
  • Deliver the project on time and on budget
  • Complete business continuity

Track progress with key performance indicators

Set key performance indicators (KPIs) to track progress toward your goals and objectives. This keeps leadership informed and motivates the team. Some sample KPIs might look like:

[Image: Sample KPIs for HCL Informix]

Identify the capabilities you want to incorporate into your applications

With your vision in place, identify capabilities you wish to incorporate into your applications to help you meet your KPIs. Consider incorporating capabilities like:

  • Cloud computing
  • Third-party solutions and microservices
  • Orchestration and automation
  • DevOps practices
  • APIs for better integration

Evaluate each capability and sketch an architecture diagram to determine if existing tools meet your needs. If not, identify new services required for your modernization project.

Get Your Modernization Checklist

For more best-practice approaches to modernizing your Informix applications, download the HCL Informix Modernization Checklist and stay tuned for the next blog in the series.

Get the Checklist >

Informix® is a trademark of IBM Corporation in at least one jurisdiction and is used under license.

The post The Essential Guide to Modernizing HCL Informix Applications (Part 1) appeared first on Actian.


Author: Nick Johnson

Table Cloning: Create Instant Snapshots Without Data Duplication

What is Table Cloning?

Table Cloning is a database operation that makes a copy of an X100 table without the performance penalty of copying the underlying data. If you arrived here looking for the SQL syntax to clone a table in Actian Vector, it works like this:

CREATE TABLE newtable CLONE existingtable
    [, newtable2 CLONE existingtable2, ...]
    [ WITH <option, option, ...> ];

The WITH options are briefly listed here. We'll explain them in more detail later on.

WITH <option>

NODATA
    Clone only the table structure, not its contents.

GRANTS
    Also copy privileges from existing tables to new tables.

REFERENCES = NONE | RESTRICTED | EXTENDED
    Disable creation of references between new tables (NONE), create references between new tables to match those between existing tables (RESTRICTED, the default), or additionally enable creation of references from new tables to existing tables not being cloned (EXTENDED).

The new table, the "clone", has the same contents the existing table did at the point of cloning. The main thing to remember is that the clone you've created is just a table. No more, no less. It looks exactly like a copy. The new table may subsequently be inserted into, updated, deleted from, and even dropped, without affecting the original table, and vice versa.

While developing this feature, we commonly fielded questions like "Can you create a view on a clone?" or "Can you update a clone?" and "Can you grant privileges on a clone?" The answer, in all cases, is yes. It's a table. If it helps, after you clone a table, you can simply forget that the table was created with the CLONE syntax. That's what Vector does.

What Isn't Table Cloning?

It's just as important to recognize what Table Cloning is not. You can only clone an X100 table, all its contents or none of it, within the same database. You can't clone only part of a table, or clone a table between two databases.

Whatā€™s it For?

With Table Cloning, you can make inexpensive copies of an existing X100 table. This can be useful to create and persist daily snapshots of a table that changes gradually over time, for example. These snapshots can be queried like any other table.
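
For example, a nightly job could persist a dated snapshot of a table with a single statement. This is only a sketch using the documented CLONE syntax; the table names are hypothetical:

CREATE TABLE orders_snapshot_20241017 CLONE orders;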

Users can also make experimental copies of sets of tables and try out changes on them, before applying those changes to the original tables. This makes it faster for users to experiment with tables safely.

How Table Cloning Works

In X100's storage model, when a block of table data is written to storage, that block is never modified, except to be deleted when no longer required. If the table's contents are modified, a new block is written with the new data, and the table's list of storage blocks is updated to include the new block and exclude the old one.

[Figure: table cloning block diagram]

X100 catalog and storage for a one-column table MYTABLE, with two storage blocks.

There's nothing to stop X100 from creating a table that references another table's storage blocks, as long as we know which storage blocks are still referenced by at least one table. So that's what we do to clone a table. This allows X100 to create what looks like a copy of the table, without having to copy the underlying data.

In the image below, mytableclone references the same storage blocks as mytable does.

[Figure: table cloning block diagram]

X100 catalog and storage after MYTABLECLONE is created as a clone of MYTABLE.

Note that every table column, including the column in the new table, "owns" a storage file, which is the destination file for any new storage blocks for that column. So if new rows are added to mytableclone in the diagram above, the new block will be added to its own storage file:

[Figure: table cloning block diagram]

X100 catalog and storage after another storage block is added to MYTABLECLONE.

X100 tables can also have in-memory updates, which are applied on top of the storage blocks when the table is scanned. These in-memory updates are not cloned, but copied. This means a table which has recently had a large number of updates might not clone instantly.

My First Clone: A Simple Example

Create a table (note that on Actian Ingres, WITH STRUCTURE=X100 is needed to ensure you get an X100 table):

CREATE TABLE mytable (c1 INT, c2 VARCHAR(10)) WITH STRUCTURE=X100;

Insert some rows into it:

INSERT INTO mytable VALUES (1, 'one'), (2, 'two'), (3, 'three'), (4, 'four'), (5, 'five');

Create a clone of this table called myclone:

CREATE TABLE myclone CLONE mytable;

The tables now have the same contents:

SELECT * FROM mytable;
c1 c2
1 one
2 two
3 three
4 four
5 five
SELECT * FROM myclone;
c1 c2
1 one
2 two
3 three
4 four
5 five

Note that there is no further relationship between the table and its clone. The two tables can be modified independently, as if you'd created the new table with CREATE TABLE ... AS SELECT ...

UPDATE mytable SET c2 = 'trois' WHERE c1 = 3;
INSERT INTO mytable VALUES (6, 'six');
DELETE FROM myclone WHERE c1 = 1;
SELECT * FROM mytable;
c1 c2
1 one
2 two
3 trois
4 four
5 five
6 six
SELECT * FROM myclone;
c1 c2
2 two
3 three
4 four
5 five

You can even drop the original table, and the clone is unaffected:

DROP TABLE mytable;

SELECT * FROM myclone;
c1 c2
2 two
3 three
4 four
5 five

Security and Permissions

You can clone any table you have the privilege to SELECT from, even if you don't own it.

When you create a table, whether by cloning or otherwise, you own it. That means you have all privileges on it, including the privilege to drop it.

By default, the privileges other people have on your newly-created clone are the same as if you created a table the normal way. If you want all the privileges other users were GRANTed on the existing table to be granted to the clone, use WITH GRANTS.
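
For example, the following is a minimal sketch (reusing mytable from the walkthrough above) that copies the existing GRANTs onto the new clone:

CREATE TABLE mygrantclone CLONE mytable WITH GRANTS;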

Metadata-Only Clone

The option WITH NODATA creates an empty copy of the existing table(s), without the contents. If you do this, you're not doing anything you couldn't do with existing SQL, of course, but it may be easier to use the CLONE syntax to make a metadata copy of a group of tables with complicated referential relationships between them.

The WITH NODATA option is also useful on Actian Ingres 12.0. The clone functionality only works with X100 tables, but Actian Ingres 12.0 allows you to create metadata-only clones of non-X100 Ingres tables, such as heap tables.
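
As a sketch, the following creates a new empty table with the same definition as mytable but none of its rows:

CREATE TABLE myemptyclone CLONE mytable WITH NODATA;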

Cloning Multiple Tables at Once

If you have a set of tables connected by foreign key relationships, you can clone them to create a set of tables connected by the same relationships, as long as you clone them all in the same statement.

For example, suppose we have the SUPPLIER, PART, and PART_SUPP tables, defined like this:

CREATE TABLE supplier (
supplier_id INT PRIMARY KEY,
supplier_name VARCHAR(40),
supplier_address VARCHAR(200)
);

CREATE TABLE part (
part_id INT PRIMARY KEY,
part_name VARCHAR(40)
);

CREATE TABLE part_supp (
supplier_id INT REFERENCES supplier(supplier_id),
part_id INT REFERENCES part(part_id),
cost DECIMAL(6, 2)
);

If we want to clone these three tables at once, we can supply multiple pairs of tables to the clone statement:

CREATE TABLE
supplier_clone CLONE supplier,
part_clone CLONE part,
part_supp_clone CLONE part_supp;

We now have clones of the three tables. PART_SUPP_CLONE references the new tables SUPPLIER_CLONE and PART_CLONE; it does not reference the old tables PART and SUPPLIER.

Without Table Cloning, we'd have to create the new tables ourselves with the same definitions as the existing tables, then copy the data into the new tables, which would be further slowed by the necessary referential integrity checks. With Table Cloning, the database management system doesn't have to perform an expensive referential integrity check on the new tables, because their contents are the same as the existing tables, which have the same constraints.

WITH REFERENCES=NONE

Don't want your clones to have references to each other? Then use WITH REFERENCES=NONE:

CREATE TABLE
supplier_clone CLONE supplier,
part_clone CLONE part,
part_supp_clone CLONE part_supp
WITH REFERENCES=NONE;

WITH REFERENCES=EXTENDED

Normally, the CLONE statement will only create references between the newly-created clones.

For example, if you only cloned PART and PART_SUPP:

CREATE TABLE
part_clone CLONE part,
part_supp_clone CLONE part_supp;

PART_SUPP_CLONE would have a foreign key reference to PART_CLONE, but not to SUPPLIER.

But what if you want all the clones you create in a statement to retain their foreign keys, even if that means referencing the original tables? You can do that if you want, using WITH REFERENCES=EXTENDED:

CREATE TABLE
part_clone CLONE part,
part_supp_clone CLONE part_supp
WITH REFERENCES=EXTENDED;

After the above SQL, PART_SUPP_CLONE would reference PART_CLONE and SUPPLIER.

Table Cloning Use Case and Real-World Benefits

The ability to clone tables opens up new use cases. For example, a large eCommerce company can use table cloning to replicate its production order database. This allows easier reporting and analytics without impacting the performance of the live system. Benefits include:

  • Reduced reporting latency. Previously, reports were generated overnight using batch ETL processes. Table cloning enables near real-time reporting, supporting faster decision-making. It can also be used to create a low-cost daily or weekly snapshot of a table that receives gradual changes.
  • Improved analyst productivity. Analysts no longer have to make a full copy of a table in order to try out modifications. They can clone the table and work on the clone instead, without having to wait for a large table copy or modifying the original.
  • Cost savings. A clone takes up no additional storage initially, because it only refers to the original table's storage blocks. New storage blocks are written only as needed when the table is modified. Table cloning would therefore reduce storage costs compared to maintaining a separate data warehouse for reporting.

This hypothetical example illustrates the potential benefits of table cloning in a real-world scenario. By implementing table cloning effectively, you can achieve significant improvements in development speed, performance, cost savings, and operational efficiency.

Create Snapshot Copies of X100 Tables

Table Cloning allows the inexpensive creation of snapshot copies of existing X100 tables. These new tables are tables in their own right, which may be modified independently of the originals.

Actian Vector 7.0, available this fall, will offer Table Cloning. You'll be able to easily create snapshots of table data at any moment, while having the ability to revert to previous states without duplicating storage. With this Table Cloning capability, you'll be able to quickly test scenarios, restore data to a prior state, and reduce storage costs. Find out more.

The post Table Cloning: Create Instant Snapshots Without Data Duplication appeared first on Actian.


Author: Actian Corporation

Build an IoT Smart Farm Using Raspberry Pi and Actian Zen

Technology is changing every industry, and agriculture is no exception. The Internet of Things (IoT) and edge computing provide powerful tools to make traditional farming practices more efficient, sustainable, and data-driven. One affordable and versatile platform that can form the basis for such a smart agriculture system is the Raspberry Pi.

In this blog post, we will build a smart agriculture system using IoT devices to monitor soil moisture, temperature, and humidity levels across a farm. The goal is to optimize irrigation and ensure optimal growing conditions for crops. We'll use a Raspberry Pi running Raspbian OS, Actian Zen Edge for database management, Zen Enterprise to handle the detected anomalies on the remote server database, and Python with the Zen ODBC interface for data handling. Additionally, we'll leverage AWS SNS (Simple Notification Service) to send alerts for detected anomalies in real time for immediate action.

Prerequisites

Before we start, ensure you have the following:

  • A Raspberry Pi running Raspbian OS.
  • Python installed on your Raspberry Pi.
  • Actian Zen Edge database installed.
  • PyODBC library installed.
  • AWS SNS set up with an appropriate topic and access credentials.

Step 1: Setting Up the Raspberry Pi

First, update your Raspberry Pi and install the necessary libraries:

sudo apt-get update
sudo apt-get upgrade
sudo apt-get install python3-pip
pip3 install pyodbc boto3

Step 2: Install Actian Zen Edge

Follow the instructions on the Actian Zen Edge download page to download and install Actian Zen Edge on your Raspberry Pi.

Step 3: Create Tables in the Database

We need to create tables to store sensor data and anomalies. Connect to your Actian Zen Edge database and create the following table:

CREATE TABLE sensor_data (
    id identity PRIMARY KEY,
    timestamp DATETIME,
    soil_moisture FLOAT,
    temperature FLOAT,
    humidity FLOAT
);

Install Zen Enterprise, connect to the central database, and create the following table:

CREATE TABLE anomalies (
    id identity PRIMARY KEY,
    timestamp DATETIME,
    soil_moisture FLOAT,
    temperature FLOAT,
    humidity FLOAT,
    description longvarchar
);

Step 4: Define the Python Script

Now, let's write the Python script to handle sensor data insertion, anomaly detection, and alerting via AWS SNS.

Anomaly Detection Logic

Define a function to check for anomalies based on predefined thresholds:

def check_for_anomalies(data):
    # flag readings outside the predefined thresholds
    threshold = {'soil_moisture': 30.0, 'temperature': 35.0, 'humidity': 70.0}
    anomalies = []
    if data['soil_moisture'] < threshold['soil_moisture']:
        anomalies.append('Low soil moisture detected')
    if data['temperature'] > threshold['temperature']:
        anomalies.append('High temperature detected')
    if data['humidity'] > threshold['humidity']:
        anomalies.append('High humidity detected')
    return anomalies

Insert Sensor Data

Define a function to insert sensor data into the database:

import pyodbc

def insert_sensor_data(data):
    # connect to the local Actian Zen Edge database via ODBC
    conn = pyodbc.connect('Driver={Pervasive ODBC Interface};servername=localhost;Port=1583;serverdsn=demodata;')
    cursor = conn.cursor()
    cursor.execute("INSERT INTO sensor_data (timestamp, soil_moisture, temperature, humidity) VALUES (?, ?, ?, ?)",
                   (data['timestamp'], data['soil_moisture'], data['temperature'], data['humidity']))
    conn.commit()
    cursor.close()
    conn.close()

Send Anomalies to the Remote Database

Define a function to send detected anomalies to the database:

def send_anomalies_to_server(anomaly_data):
    # write the detected anomaly to the central Zen Enterprise database
    conn = pyodbc.connect('Driver={Pervasive ODBC Interface};servername=<remote server>;Port=1583;serverdsn=demodata;')
    cursor = conn.cursor()
    cursor.execute("INSERT INTO anomalies (timestamp, soil_moisture, temperature, humidity, description) VALUES (?, ?, ?, ?, ?)",
                   (anomaly_data['timestamp'], anomaly_data['soil_moisture'], anomaly_data['temperature'], anomaly_data['humidity'], anomaly_data['description']))
    conn.commit()
    cursor.close()
    conn.close()

Send Alerts Using AWS SNS

Define a function to send alerts using AWS SNS:

import boto3  # AWS SDK for Python, used to publish the alert to SNS

def send_alert(message):
    sns_client = boto3.client('sns', aws_access_key_id='Your key ID',
                              aws_secret_access_key='Your Access key', region_name='your-region')
    topic_arn = 'arn:aws:sns:your-region:your-account-id:your-topic-name'
    response = sns_client.publish(
        TopicArn=topic_arn,
        Message=message,
        Subject='Anomaly Alert'
    )
    return response

Replace your-region, your-account-id, and your-topic-name with your actual AWS SNS topic details.

Step 5: Generate Sensor Data

Define a function to simulate real-world sensor data:

import random
import datetime

def generate_sensor_data():
    # simulate one reading from the field sensors
    return {
        'timestamp': datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S'),
        'soil_moisture': random.uniform(20.0, 40.0),
        'temperature': random.uniform(15.0, 45.0),
        'humidity': random.uniform(30.0, 80.0)
    }

Step 6: Main Function to Simulate Data Collection and Processing

Finally, put everything together in a main function:

def main():
    for _ in range(100):
        sensor_data = generate_sensor_data()
        insert_sensor_data(sensor_data)
        anomalies = check_for_anomalies(sensor_data)
        if anomalies:
            anomaly_data = {
                'timestamp': sensor_data['timestamp'],
                'soil_moisture': sensor_data['soil_moisture'],
                'temperature': sensor_data['temperature'],
                'humidity': sensor_data['humidity'],
                'description': ', '.join(anomalies)
            }
            send_anomalies_to_server(anomaly_data)
            send_alert(anomaly_data['description'])

if __name__ == "__main__":
    main()

Conclusion

And there you have it! By following these steps, you've successfully set up a basic smart agriculture system on a Raspberry Pi using Actian Zen Edge and Python. This system, which monitors soil moisture, temperature, and humidity levels, detects anomalies, stores data in databases, and sends notifications via AWS SNS, is a scalable solution for optimizing irrigation and ensuring optimal growing conditions for crops. Now, it's your turn to apply this knowledge and contribute to the future of smart agriculture.

Remember to replace placeholders with your actual AWS SNS topic details and database connection details. Happy farming!

The post Build an IoT Smart Farm Using Raspberry Pi and Actian Zen appeared first on Actian.


Read More
Author: Johnson Varughese

Data Warehousing Demystified: Your Guide From Basics to Breakthroughs

Table of contents

Understanding the Basics

What is a Data Warehouse?

The Business Imperative of Data Warehousing

The Technical Role of Data Warehousing

Understanding the Differences: Databases, Data Warehouses, and Analytics Databases

The Human Side of Data: Key User Personas and Their Pain Points

Data Warehouse Use Cases For Modern Organizations

6 Common Business Use Cases

9 Technical Use Cases

Understanding the Basics

Welcome to data warehousing 101. For those of you who remember when "cloud" only meant rain and "big data" was just a database that ate too much, buckle up: we've come a long way. Here's an overview:

What is a Data Warehouse?

Data warehouses are large storage systems where data from various sources is collected, integrated, and stored for later analysis. Data warehouses are typically used in business intelligence (BI) and reporting scenarios where you need to analyze large amounts of historical and real-time data. They can be deployed on-premises, on a cloud (private or public), or in a hybrid manner.

Think of a data warehouse as the Swiss Army knife of the data world: it's got everything you need, but unlike that dusty tool in your drawer, you'll actually use it every day!

Prominent examples include Actian Data Platform, Amazon Redshift, Google BigQuery, Snowflake, Microsoft Azure Synapse Analytics, and IBM Db2 Warehouse, among others.

Proper data consolidation, integration, and seamless connectivity with BI tools are crucial for a data strategy and visibility into the business. A data warehouse without this holistic view provides an incomplete narrative, limiting the potential insights that can be drawn from the data.

"Proper data consolidation, integration, and seamless connectivity with BI tools are crucial aspects of a data strategy. A data warehouse without this holistic view provides an incomplete narrative, limiting the potential insights that can be drawn from the data."

The Business Imperative of Data Warehousing

Data warehouses are instrumental in enabling organizations to make informed decisions quickly and efficiently. The primary value of a data warehouse lies in its ability to facilitate a comprehensive view of an organizationā€™s data landscape, supporting strategic business functions such as real-time decision-making, customer behavior analysis, and long-term planning.

But why is a data warehouse so crucial for modern businesses? Let's dive in.

A data warehouse is a strategic layer that is essential for any organization looking to maintain competitiveness in a data-driven world. The ability to act quickly on analyzed data translates to improved operational efficiencies, better customer relationships, and enhanced profitability.

The Technical Role of Data Warehousing

The primary function of a data warehouse is to facilitate analytics, not to perform analytics itself. The BI team configures the data warehouse to align with its analytical needs. Essentially, a data warehouse acts as a structured repository, comprising tables of rows and columns of carefully curated and frequently updated data assets. These assets feed BI applications that drive analytics.

"The primary function of a data warehouse is to facilitate analytics, not to perform analytics itself."

Achieving the business imperatives of data warehousing relies heavily on these four key technical capabilities:

1. Real-Time Data Processing: This is critical for applications that require immediate action, such as fraud detection systems, real-time customer interaction management, and dynamic pricing strategies. Real-time data processing in a data warehouse is like a barista making your coffee to order: it happens right when you need it, tailored to your specific requirements.

2. Scalability and Performance: Modern data warehouses must handle large datasets and support complex queries efficiently. This capability is particularly vital in industries such as retail, finance, and telecommunications, where the ability to scale according to demand is necessary for maintaining operational efficiency and customer satisfaction.

3. Data Quality and Accessibility: The quality of insights directly correlates with the quality of data ingested and stored in the data warehouse. Ensuring data is accurate, clean, and easily accessible is paramount for effective analysis and reporting. Therefore, itā€™s crucial to consider the entire data chain when crafting a data strategy, rather than viewing the warehouse in isolation.

4. Advanced Capabilities: Modern data warehouses are evolving to meet new challenges and opportunities:

      • Data virtualization: Allowing queries across multiple data sources without physical data movement.
      • Integration with data lakes: Enabling analysis of both structured and unstructured data.
      • In-warehouse machine learning: Supporting the entire ML lifecycle, from model training to deployment, directly within the warehouse environment.

"In the world of data warehousing, scalability isn't just about handling more data; it's about adapting to the ever-changing landscape of business needs."

Understanding the Differences: Databases, Data Warehouses, and Analytics Databases

Databases, data warehouses, and analytics databases serve distinct purposes in the realm of data management, with each optimized for specific use cases and functionalities.

A database is a software system designed to efficiently store, manage, and retrieve structured data. It is optimized for Online Transaction Processing (OLTP), excelling at handling numerous small, discrete transactions that support day-to-day operations. Examples include MySQL, PostgreSQL, and MongoDB. While databases are adept at storing and retrieving data, they are not specifically designed for complex analytical querying and reporting.

Data warehouses, on the other hand, are specialized databases designed to store and manage large volumes of structured, historical data from multiple sources. They are optimized for analytical processing, supporting complex queries, aggregations, and reporting. Data warehouses are designed for Online Analytical Processing (OLAP), using techniques like dimensional modeling and star schemas to facilitate complex queries across large datasets. Data warehouses transform and integrate data from various operational systems into a unified, consistent format for analysis. Examples include Actian Data Platform, Amazon Redshift, Snowflake, and Google BigQuery.

Analytics databases, also known as analytical databases, are a subset of databases optimized specifically for analytical processing. They offer advanced features and capabilities for querying and analyzing large datasets, making them well-suited for business intelligence, data mining, and decision support. Analytics databases bridge the gap between traditional databases and data warehouses, offering features like columnar storage to accelerate analytical queries while maintaining some transactional capabilities. Examples include Actian Vector, Exasol, and Vertica. While analytics databases share similarities with traditional databases, they are specialized for analytical workloads and may incorporate features commonly associated with data warehouses, such as columnar storage and parallel processing.

"In the data management spectrum, databases, data warehouses, and analytics databases each play distinct roles. While all data warehouses are databases, not all databases are data warehouses. Data warehouses are specifically tailored for analytical use cases. Analytics databases bridge the gap, but aren't necessarily full-fledged data warehouses, which often encompass additional components and functionalities beyond pure analytical processing."
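
To make the OLTP/OLAP distinction concrete, here is a small illustrative sketch in Python. The table and column names (orders, fact_sales, dim_date, dim_product) are hypothetical and not tied to any of the products mentioned above; either query could be run through any standard database connection.

# OLTP: a point lookup that touches a single row, typical of an operational database.
oltp_query = """
SELECT order_id, status, total
FROM orders
WHERE order_id = 1001
"""

# OLAP: an aggregation over a star schema, typical of a data warehouse.
olap_query = """
SELECT d.year, p.category, SUM(f.revenue) AS revenue
FROM fact_sales f
JOIN dim_date d ON f.date_key = d.date_key
JOIN dim_product p ON f.product_key = p.product_key
GROUP BY d.year, p.category
ORDER BY d.year, revenue DESC
"""

print(oltp_query)
print(olap_query)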

The Human Side of Data: Key User Personas and Their Pain Points

Welcome to Data Warehouse Personalities 101. No Myers-Briggs here, just SQL, Python, and a dash of data-induced delirium. Let's see who's who in this digital zoo.

Note: While these roles are presented distinctly, in practice they often overlap or merge, especially in organizations of varying sizes and across different industries. The following personas are illustrative, designed to highlight the diverse perspectives and challenges related to data warehousing across common roles.

  1. DBAs are responsible for the technical maintenance, security, performance, and reliability of data warehouses. "As a DBA, I need to ensure our data warehouse operates efficiently and securely, with minimal downtime, so that it consistently supports high-volume data transactions and accessibility for authorized users."
  2. Data analysts specialize in processing and analyzing data to extract insights, supporting decision-making and strategic planning. "As a data analyst, I need robust data extraction and query capabilities from our data warehouse, so I can analyze large datasets accurately and swiftly to provide timely insights to our decision-makers."
  3. BI analysts focus on creating visualizations, reports, and dashboards from data to directly support business intelligence activities. "As a BI analyst, I need a data warehouse that integrates seamlessly with BI tools to facilitate real-time reporting and actionable business insights."
  4. Data engineers manage the technical infrastructure and architecture that supports the flow of data into and out of the data warehouse. "As a data engineer, I need to build and maintain a scalable and efficient pipeline that ensures clean, well-structured data is consistently available for analysis and reporting."
  5. Data scientists use advanced analytics techniques, such as machine learning and predictive modeling, to create algorithms that predict future trends and behaviors. "As a data scientist, I need the data warehouse to handle complex data workloads and provide the computational power necessary to develop, train, and deploy sophisticated models."
  6. Compliance officers ensure that data management practices comply with regulatory requirements and company policies. "As a compliance officer, I need the data warehouse to enforce data governance practices that secure sensitive information and maintain audit trails for compliance reporting."
  7. IT managers oversee the IT infrastructure and ensure that technological resources meet the strategic needs of the organization. "As an IT manager, I need a data warehouse that can scale resources efficiently to meet fluctuating demands without overspending on infrastructure."
  8. Risk managers focus on identifying, managing, and mitigating risks related to data security and operational continuity. "As a risk manager, I need robust disaster recovery capabilities in the data warehouse to protect critical data and ensure it is recoverable in the event of a disaster."

Data Warehouse Use Cases For Modern Organizations

In this section, we'll feature common use cases for both the business and IT sides of the organization.

6 Common Business Use Cases

This section highlights how data warehouses directly support critical business objectives and strategies.

1. Supply Chain and Inventory Management: Enhances supply chain visibility and inventory control by analyzing procurement, storage, and distribution data. Think of it as giving your supply chain a pair of X-ray glasses: suddenly, you can see through all the noise and spot exactly where that missing shipment of left-handed widgets went.

Examples:

        • Retail: Optimizing stock levels and reorder points based on sales forecasts and seasonal trends to minimize stockouts and overstock situations.
        • Manufacturing: Tracking component supplies and production schedules to ensure timely order fulfillment and reduce manufacturing delays.
        • Pharmaceuticals: Ensuring drug safety and availability by monitoring supply chains for potential disruptions and managing inventory efficiently.

2. Customer 360 Analytics: Enables a comprehensive view of customer interactions across multiple touchpoints, providing insights into customer behavior, preferences, and loyalty.

Examples:

        • Retail: Analyzing purchase history, online and in-store interactions, and customer service records to tailor marketing strategies and enhance customer experience (CX).
        • Banking: Integrating data from branches, online banking, and mobile apps to create personalized banking services and improve customer retention.
        • Telecommunications: Leveraging usage data, service interaction history, and customer feedback to optimize service offerings and improve customer satisfaction.

3. Operational Efficiency: Improves the efficiency of operations by analyzing workflows, resource allocations, and production outputs to identify bottlenecks and optimize processes. It's the business equivalent of finding the perfect traffic route to work, except instead of avoiding road construction, you're sidestepping inefficiencies and roadblocks to productivity.

Examples:

        • Manufacturing: Monitoring production lines and supply chain data to reduce downtime and improve production rates.
        • Healthcare: Streamlining patient flow from registration to discharge to enhance patient care and optimize resource utilization.
        • Logistics: Analyzing route efficiency and warehouse operations to reduce delivery times and lower operational costs.

4. Financial Performance Analysis: Offers insights into financial health through revenue, expense, and profitability analysis, helping companies make informed financial decisions.

Examples:

        • Finance: Tracking and analyzing investment performance across different portfolios to adjust strategies according to market conditions.
        • Real Estate: Evaluating property investment returns and operating costs to guide future investments and development strategies.
        • Retail: Assessing the profitability of different store locations and product lines to optimize inventory and pricing strategies.

5. Risk Management and Compliance: Helps organizations manage risk and ensure compliance with regulations by analyzing transaction data and audit trails. It's like having a super-powered compliance officer who can spot a regulatory red flag faster than you can say "GDPR."

Examples:

        • Banking: Detecting patterns indicative of fraudulent activity and ensuring compliance with anti-money laundering laws.
        • Healthcare: Monitoring for compliance with healthcare standards and regulations, such as HIPAA, by analyzing patient data handling and privacy measures.
        • Energy: Assessing and managing risks related to energy production and distribution, including compliance with environmental and safety regulations.

6. Market and Sales Analysis: Analyzes market trends and sales data to inform strategic decisions about product development, marketing, and sales strategies.

Examples:

        • eCommerce: Tracking online customer behavior and sales trends to adjust marketing campaigns and product offerings in real time.
        • Automotive: Analyzing regional sales data and customer preferences to inform marketing efforts and align production with demand.
        • Entertainment: Evaluating the performance of media content across different platforms to guide future production and marketing investments.

These use cases demonstrate how data warehouses have become the backbone of data-driven decision making for organizations. Theyā€™ve evolved from mere data repositories into critical business tools.

In an era where data is often called "the new oil," data warehouses serve as the refineries, turning that raw resource into high-octane business fuel. The real power of data warehouses lies in their ability to transform vast amounts of data into actionable insights, driving strategic decisions across all levels of an organization.

9 Technical Use Cases

Ever wonder how boardroom strategies transform into digital reality? This section pulls back the curtain on the technical wizardry of data warehousing. Weā€™ll explore nine use cases that showcase how data warehouse technologies turn business visions into actionable insights and competitive advantages. From powering machine learning models to ensuring regulatory compliance, letā€™s dive into the engine room of modern data-driven decision making.

1. Data Science and Machine Learning: Data warehouses can store and process large datasets used for machine learning models and statistical analysis, providing the computational power needed for data scientists to train and deploy models (a brief sketch follows the feature list below).

Key features:

        1. Built-in support for machine learning algorithms and libraries (like TensorFlow).
        2. High-performance data processing capabilities for handling large datasets (like Apache Spark).
        3. Tools for deploying and monitoring machine learning models (like MLflow).
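
As a rough illustration of that workflow, this sketch pulls a curated feature table out of a warehouse over ODBC and trains a simple scikit-learn model. It is a minimal example, not a reference implementation: the DSN, table, and column names are placeholders, and it assumes pyodbc, pandas, and scikit-learn are installed.

import pyodbc
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# Placeholder connection string; point this at your own warehouse.
conn = pyodbc.connect('DSN=my_warehouse;UID=user;PWD=password')

# Pull a hypothetical churn feature table into a DataFrame.
df = pd.read_sql("SELECT tenure, monthly_spend, support_tickets, churned FROM churn_features", conn)
conn.close()

X_train, X_test, y_train, y_test = train_test_split(
    df[['tenure', 'monthly_spend', 'support_tickets']], df['churned'], test_size=0.2)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print('Holdout accuracy:', model.score(X_test, y_test))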

2. Data as a Service (DaaS): Companies can use cloud data warehouses to offer cleaned and curated data to external clients or internal departments, supporting various use cases across industries.

Key features:

        1. Robust data integration and transformation capabilities that ensure data accuracy and usability (using tools like Actian DataConnect, Actian Data Platform for data integration, and Talend).
        2. Multi-tenancy and secure data isolation to manage data access (features like those in Amazon Redshift).
        3. APIs for seamless data access and integration with other applications (such as RESTful APIs).
        4. Built-in data sharing tools (features like those in Snowflake).

3. Regulatory Compliance and Reporting: Many organizations use cloud data warehouses to meet compliance requirements by storing and managing access to sensitive data in a secure, auditable manner. Itā€™s like having a digital paper trail that would make even the most meticulous auditor smile. No more drowning in file cabinets!

Key features:

        1. Encryption of data at rest and in transit (technologies like AES encryption).
        2. Comprehensive audit trails and role-based access control (features like those available in Oracle Autonomous Data Warehouse).
        3. Adherence to global compliance standards like GDPR and HIPAA (using compliance frameworks such as those provided by Microsoft Azure).

4. Administration and Observability: Facilitates the management of data warehouse platforms and enhances visibility into system operations and performance. Consider it your data warehouseā€™s health monitorā€”keeping tabs on its vital signs so you can diagnose issues before they become critical.

Key features:

        1. A platform observability dashboard to monitor and manage resources, performance, and costs (as seen in Actian Data Platform, or Google Cloudā€™s operations suite).
        2. Comprehensive user access controls to ensure data security and appropriate access (features seen in Microsoft SQL Server).
        3. Real-time monitoring dashboards for live tracking of system performance (like Grafana).
        4. Log aggregation and analysis tools to streamline troubleshooting and maintenance (implemented with tools like ELK Stack).

5. Seasonal Demand Scaling: The ability to scale resources up or down based on demand makes cloud data warehouses ideal for industries with seasonal fluctuations, allowing them to handle peak data loads without permanent investments in hardware. Itā€™s like having a magical warehouse that expands during the holiday rush and shrinks during the slow season. No more paying for empty shelf space!

Key features:

        1. Semi-automatic or fully automatic resource allocation for handling variable workloads (like Actian Data Platformā€™s scaling and Schedules feature, or Google BigQueryā€™s automatic scaling).
        2. Cloud-based scalability options that provide elasticity and cost efficiency (as seen in AWS Redshift).
        3. Distributed architecture that allows horizontal scaling (such as Apache Hadoop).

6. Enhanced Performance and Lower Costs: Modern data warehouses are engineered to provide superior performance in data processing and analytics, while simultaneously reducing the costs associated with data management and operations. Imagine a race car that not only goes faster but also uses less fuel. Thatā€™s what weā€™re talking about hereā€”speed and efficiency in perfect harmony.

Key features:

        1. Advanced query optimizers that adjust query execution strategies based on data size and complexity (like Oracleā€™s Query Optimizer).
        2. In-memory processing to accelerate data access and analysis (such as SAP HANA).
        3. Caching mechanisms to reduce load times for frequently accessed data (implemented in systems like Redis).
        4. Data compression mechanisms to reduce the storage footprint of data, which not only saves on storage costs but also improves query performance by minimizing the amount of data that needs to be read from disk (like the advanced compression techniques in Amazon Redshift).

7. Disaster Recovery: Cloud data warehouses often feature built-in redundancy and backup capabilities, ensuring data is secure and recoverable in the event of a disaster. Think of it as your dataā€™s insurance policyā€”when disaster strikes, youā€™re not left empty-handed.

Key features:

        1. Redundancy and data replication across geographically dispersed data centers (like those offered by IBM Db2 Warehouse).
        2. Automated backup processes and quick data restoration capabilities (like the features in Snowflake).
        3. High availability configurations to minimize downtime (such as VMwareā€™s HA solutions).

Note: The following use cases are typically driven by separate solutions, but are core to an organizationā€™s warehousing strategy.

8. (Depends on) Data Consolidation and Integration: By consolidating data from diverse sources like CRM and ERP systems into a unified repository, data warehouses facilitate a comprehensive view of business operations, enhancing analysis and strategic planning.

Key features:

          1. ETL and ELT capabilities to process and integrate diverse data (using platforms like Actian Data Platform or Informatica).
          2. Support for multiple data formats and sources, enhancing data accessibility (capabilities seen in Actian Data Platform or SAP Data Warehouse Cloud).
          3. Data quality tools that clean and validate data (like tools provided by Dataiku).

9. (Facilitates) Business Intelligence: Data warehouses support complex data queries and are integral in generating insightful reports and dashboards, which are crucial for making informed business decisions. Consider this the grand finale where all your data prep work pays offā€”transforming raw numbers into visual stories that even the most data-phobic executive can understand.

Key features:

          1. Integration with leading BI tools for real-time analytics and reporting (like Tableau).
          2. Data visualization tools and dashboard capabilities to present actionable insights (such as those in Snowflake and Power BI).
          3. Advanced query optimization for fast and efficient data retrieval (using technologies like SQL Server Analysis Services).

The technical capabilities weā€™ve discussed showcase how modern data warehouses are breaking down silos and bridging gaps across organizations. Theyā€™re not just tech tools; theyā€™re catalysts for business transformation. In a world where data is the new currency, a well-implemented data warehouse can be your organizationā€™s most valuable investment.

However, as data warehouses grow in power and complexity, many organizations find themselves grappling with a new challenge: managing an increasingly intricate data ecosystem. Multiple vendors, disparate systems, and complex data pipelines can turn what should be a transformative asset into a resource-draining headache.

"In today's data-driven world, companies need a unified solution that simplifies their data operations. Actian Data Platform offers an all-in-one approach, combining data integration, data quality, and data warehousing, eliminating the need for multiple vendors and complex data pipelines."

This is where Actian Data Platform shines, offering an all-in-one solution that combines data integration, data quality, and data warehousing capabilities. By unifying these core data processes into a single, cohesive platform, Actian eliminates the need for multiple vendors and simplifies data operations. Organizations can now focus on what truly mattersā€”leveraging data for strategic insights and decision-making, rather than getting bogged down in managing complex data infrastructure.

As we look to the future, the organizations that will thrive are those that can most effectively turn data into actionable insights. With solutions like Actian Data Platform, businesses can truly capitalize on their data warehouse investment, driving meaningful transformation without the traditional complexities of data management.

Experience the data platform for yourself with a custom demo.

The post Data Warehousing Demystified: Your Guide From Basics to Breakthroughs appeared first on Actian.


Read More
Author: Fenil Dedhia

GenAI at the Edge: The Power of TinyML and Embedded Databases

The convergence of artificial intelligence (AI) and edge computing is ushering in a new era of intelligent applications. At the heart of this transformation lies GenAI (Generative AI), which is rapidly evolving to meet the demands of real-time decision-making and data privacy. TinyML, a subset of machine learning that focuses on running models on microcontrollers, and embedded databases, which store data locally on devices, are key enablers of GenAI at the edge.

This blog delves into the potential of combining TinyML and embedded databases to create intelligent edge applications. We will explore the challenges and opportunities, as well as the potential impact on various industries.

Understanding GenAI, TinyML, and Embedded Databases

GenAI is a branch of AI that involves creating new content, such as text, images, or code. Unlike traditional AI models that analyze data, GenAI models generate new data based on the patterns they have learned.

TinyML is the process of optimizing machine learning models to run on resource-constrained devices like microcontrollers. These models are typically small, efficient, and capable of performing tasks like image classification, speech recognition, and sensor data analysis.

Embedded databases are databases designed to run on resource-constrained devices, such as microcontrollers and embedded systems. They are optimized for low power consumption, fast access times, and small memory footprints.

The Power of GenAI at the Edge

The integration of GenAI with TinyML and embedded databases presents a compelling value proposition:

  • Real-time processing: By running large language models (LLMs) at the edge, data can be processed locally, reducing latency and enabling real-time decision-making.
  • Enhanced privacy: Sensitive data can be processed and analyzed on-device, minimizing the risk of data breaches and ensuring compliance with privacy regulations.
  • Reduced bandwidth consumption: Offloading data processing to the edge can significantly reduce network traffic, leading to cost savings and improved network performance.

Technical Considerations

To successfully implement GenAI at the edge, several technical challenges must be addressed:

  • Model optimization: LLMs are often computationally intensive and require significant resources. Techniques such as quantization, pruning, and knowledge distillation can be used to optimize models for deployment on resource-constrained devices (see the sketch after this list).
  • Embedded database selection: The choice of embedded database is crucial for efficient data storage and retrieval. Factors to consider include database footprint, performance, and capabilities such as multi-model support.
  • Power management: Optimize power consumption to prolong battery life and ensure reliable operation in battery-powered devices.
  • Security: Implement robust security measures to protect sensitive data and prevent unauthorized access to the machine learning models and embedded database.
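
As one example of the model-optimization step, here is a minimal sketch of post-training quantization with the TensorFlow Lite converter. It assumes a model already trained and exported in the SavedModel format at a placeholder path; pruning, distillation, and quantization-aware training each require their own workflow.

import tensorflow as tf

# Load a trained model from a placeholder SavedModel directory.
converter = tf.lite.TFLiteConverter.from_saved_model('path/to/saved_model')

# Enable the default post-training optimizations (weight quantization).
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# The resulting .tflite file is what gets deployed to the edge device.
with open('model_quantized.tflite', 'wb') as f:
    f.write(tflite_model)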

A Case Study: Edge-Based Predictive Maintenance

Consider a manufacturing facility equipped with sensors that monitor the health of critical equipment. By deploying GenAI models and embedded databases at the edge, the facility can do the following (a simplified sketch follows these steps):

  1. Collect sensor data: Sensors continuously monitor equipment parameters such as temperature, vibration, and power consumption.
  2. Process data locally: GenAI models analyze the sensor data in real-time to identify patterns and anomalies that indicate potential equipment failures.
  3. Trigger alerts: When anomalies are detected, the system can trigger alerts to notify maintenance personnel.
  4. Optimize maintenance schedules: By predicting equipment failures, maintenance can be scheduled proactively, reducing downtime and improving overall efficiency.
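
The sketch below strings these four steps together in a deliberately simplified form. It uses Python's built-in sqlite3 module purely as a stand-in for an embedded database, and a plain threshold check as a stand-in for a trained model; the table name, thresholds, and notify_maintenance stub are all hypothetical.

import sqlite3

def notify_maintenance(message):
    # Stand-in for a real alerting integration (email, SMS, or a ticketing system).
    print('ALERT:', message)

conn = sqlite3.connect('edge_sensors.db')  # local, on-device storage
conn.execute('CREATE TABLE IF NOT EXISTS readings (ts TEXT, temperature REAL, vibration REAL)')

def process_reading(ts, temperature, vibration):
    # Steps 1 and 2: store the reading locally and analyze it on the device.
    conn.execute('INSERT INTO readings VALUES (?, ?, ?)', (ts, temperature, vibration))
    conn.commit()
    # Placeholder anomaly check; a real deployment would run an optimized model here.
    if temperature > 85.0 or vibration > 7.0:
        # Step 3: trigger an alert so maintenance can be scheduled proactively (step 4).
        notify_maintenance(f'{ts}: temperature={temperature}, vibration={vibration}')

process_reading('2024-01-01 12:00:00', 91.2, 3.4)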

The Future of GenAI at the Edge

As technology continues to evolve, we can expect to see even more innovative applications of GenAI at the edge. Advances in hardware, software, and algorithms will enable smaller, more powerful devices to run increasingly complex GenAI models. This will unlock new possibilities for edge-based AI, from personalized experiences to autonomous systems.

In conclusion, the integration of GenAI, TinyML, and embedded databases represents a significant step forward in the field of edge computing. By leveraging the power of AI at the edge, we can create intelligent, autonomous, and privacy-preserving applications.Ā 

At Actian, we help organizations run faster, smarter applications on edge devices with our lightweight, embedded database, Actian Zen. Optimized for embedded systems and edge computing, Zen combines a small footprint with fast read and write access, making it ideal for resource-constrained environments.


The post GenAI at the Edge: The Power of TinyML and Embedded Databases appeared first on Actian.


Read More
Author: Kunal Shah

A Day in the Life of an Application Owner

The role of an application owner is often misunderstood within businesses. This confusion arises because, depending on the companyā€™s size, an application owner could be the CIO or CTO at a smaller startup, or a product management lead at a larger technology company. Despite the variation in titles, the core responsibilities remain the same: managing an entire application from top to bottom, ensuring it meets the businessā€™s needs (whether itā€™s an internal or customer-facing application), and doing so cost-effectively.

Being an application owner is a dynamic and multifaceted role that requires a blend of technical expertise, strategic thinking, and excellent communication skills. Hereā€™s a glimpse into a typical day in the life of an application owner.

Morning: Planning and Prioritizing

6:30 AM – 7:30 AM: Start the Day Right

The day begins early with a cup of coffee and a quick review of emails and messages. This is the time to catch up on any overnight developments, urgent issues, or updates from global teams.

7:30 AM – 8:30 AM: Daily Stand-Up Meeting

The first official task is the daily stand-up meeting with the development team. This meeting is crucial for understanding the current status of ongoing projects, identifying any roadblocks, and setting priorities for the day. Itā€™s also an opportunity to align the teamā€™s efforts with the overall business goals and discuss any new application needs.

Mid-Morning: Deep Dive into Projects

8:30 AM – 10:00 AM: Project Reviews and Code Reviews

After the stand-up, itā€™s time to dive into project reviews. This involves going through the latest code commits, reviewing progress on key features, and ensuring that everything is on track, and if itā€™s not, create a strategy to address the issues. Code reviews are essential to maintain the quality and integrity of the application.

10:00 AM – 11:00 AM: Stakeholder Meetings

Next up are meetings with stakeholders. These could be product managers, business analysts, or even end-users. The goal is to gather feedback, discuss new requirements, and ensure that the application is meeting the needs of the business.

Late Morning: Problem Solving and Innovation

11:00 AM – 12:00 PM: Troubleshooting and Bug Fixes

No day is complete without some troubleshooting. This hour is dedicated to addressing any critical issues or bugs that have been reported. Itā€™s a time for quick thinking and problem-solving to ensure minimal disruption to users.

12:00 PM – 1:00 PM: Lunch Break and Networking

Lunch is not just a break but also an opportunity to network with colleagues, discuss ideas, and sometimes even brainstorm solutions to ongoing challenges.Ā 

Afternoon: Strategic Planning and Development

1:00 PM – 2:30 PM: Strategic Planning

The afternoon kicks off with strategic planning sessions. These involve working on the applicationā€™s roadmap, planning future releases, incorporating customer input, and aligning with the companyā€™s long-term vision. Itā€™s a time to think big and set the direction for the future.

2:30 PM – 4:00 PM: Development Time

This is the time to get hands-on with development. Whether itā€™s coding new features, optimizing existing ones, or experimenting with new technologies, this block is dedicated to building and improving the application.

Late Afternoon: Collaboration and Wrap-Up

4:00 PM – 5:00 PM: Cross-Functional Team Standup

Collaboration is key to the success of any application. This hour is spent working with cross-functional teams such as sales, UX/UI designers, and marketing to analyze and improve the product onboarding experience. The goal is to ensure that everyone is aligned and working toward the same objectives.

5:00 PM – 6:00 PM: End-of-Day Review and Planning for Tomorrow

The day wraps up with a review of what was accomplished and planning for the next day. This involves updating task boards, setting priorities, and making sure that everything is in place for a smooth start the next morning.

Evening: Continuous Learning and Relaxation

6:00 PM Onwards: Continuous Learning and Personal Time

After a productive day, itā€™s important to unwind and relax. However, the learning never stops. Many application owners spend their evenings reading up on the latest industry trends, taking online courses, or experimenting with new tools and technologies.

Being an application owner is a challenging yet rewarding role. It requires a balance of technical skills, strategic thinking, and effective communication. Every day brings new challenges, opportunities, and rewards, making it an exciting career for those who love to innovate and drive change.

If you need help managing your applications, Actian Application Services can help.Ā 

>> Learn More

The post A Day in the Life of an Application Owner appeared first on Actian.


Read More
Author: Nick Johnson

The Rise of Embedded Databases in the Age of IoT

The Internet of Things (IoT) is rapidly transforming our world. From smart homes and wearables to industrial automation and connected vehicles, billions of devices are now collecting and generating data. According to a recent analysis, the number of Internet of Things (IoT) devices worldwide is forecasted to almost double from 15.1 billion in 2020 to more than 29 billion IoT devices in 2030. This data deluge presents both challenges and opportunities, and at the heart of it all lies the need for efficient data storage and management ā€“ a role increasingly filled by embedded databases.

Traditional Databases vs. Embedded Databases

Traditional databases, designed for large-scale enterprise applications, often struggle in the resource-constrained environment of the IoT. They require significant processing power, memory, and storage, which are luxuries most IoT devices simply donā€™t have. Additionally, traditional databases are complex to manage and secure, making them unsuitable for the often-unattended nature of IoT deployments.

Embedded databases, on the other hand, are specifically designed for devices with limited resources. They are lightweight, have a small footprint, and require minimal processing power. They are also optimized for real-time data processing, crucial for many IoT applications where decisions need to be made at the edge, without relaying data to a cloud database.

Why Embedded Databases are Perfect for IoT and Edge Computing

Several key factors make embedded databases the ideal choice for IoT and edge computing:

  • Small Footprint: Embedded databases require minimal storage and memory, making them ideal for devices with limited resources. This allows for smaller form factors and lower costs for IoT devices.
  • Low Power Consumption: Embedded databases are designed to be energy-efficient, minimizing the power drain on battery-powered devices, a critical concern for many IoT applications.
  • Fast Performance: Real-time data processing is essential for many IoT applications. Embedded databases are optimized for speed, ensuring timely data storage, retrieval, and analysis at the edge.
  • Reliability and Durability: IoT devices often operate in harsh environments. Embedded databases are designed to be reliable and durable, ensuring data integrity even in case of power failures or device malfunctions.
  • Security: Security is paramount in the IoT landscape. Embedded databases incorporate robust security features to protect sensitive data from unauthorized access.
  • Ease of Use: Unlike traditional databases, embedded databases are designed to be easy to set up and manage. This simplifies development and deployment for resource-constrained IoT projects.

Building complex IoT apps shouldn't be a headache. Let us show you how our embedded edge database can simplify your next IoT project.

Benefits of Using Embedded Databases in IoT Applications

The advantages of using embedded databases in IoT applications are numerous:

  • Improved Decision-Making: By storing and analyzing data locally, embedded databases enable real-time decision making at the edge. This reduces reliance on cloud communication and allows for faster, more efficient responses.
  • Enhanced Functionality: Embedded databases can store device configuration settings, user preferences, and historical data, enabling richer functionality and a more personalized user experience.
  • Reduced Latency: Processing data locally eliminates the need for constant communication with the cloud, significantly reducing latency and improving responsiveness.
  • Offline Functionality: Embedded databases allow devices to function even when disconnected from the internet, ensuring uninterrupted operation and data collection.
  • Cost Savings: By reducing reliance on cloud storage and processing, embedded databases can help lower overall operational costs for IoT deployments.

Use Cases for Embedded Databases in IoT

Embedded databases are finding applications across a wide range of IoT sectors, including:

  • Smart Homes: Embedded databases can store device settings, energy usage data, and user preferences, enabling intelligent home automation and energy management.
  • Wearables: Fitness trackers and smartwatches use embedded databases to store health data, activity logs, and user settings.
  • Industrial Automation: Embedded databases play a crucial role in industrial IoT applications, storing sensor data, equipment settings, and maintenance logs for predictive maintenance and improved operational efficiency.
  • Connected Vehicles: Embedded databases are essential for connected car applications, storing vehicle diagnostics, driver preferences, and real-time traffic data to enable features like self-driving cars and intelligent navigation systems.
  • Asset Tracking: Embedded databases can be used to track the location and condition of assets in real-time, optimizing logistics and supply chain management.

The Future of Embedded Databases in the IoT

As the IoT landscape continues to evolve, embedded databases are expected to play an even more critical role. Here are some key trends to watch:

  • Increased Demand for Scalability: As the number of connected devices explodes, embedded databases will need to be scalable to handle larger data volumes and more complex workloads.
  • Enhanced Security Features: With growing security concerns in the IoT, embedded databases will need to incorporate even more robust security measures to protect sensitive data.
  • Cloud Integration: While embedded databases enable edge computing, there will likely be a need for seamless integration with cloud platforms for data analytics, visualization, and long-term storage.

The rise of the IoT has ushered in a new era for embedded databases. Their small footprint, efficiency, and scalability make them the perfect fit for managing data at the edge of the network. As the IoT landscape matures, embedded databases will continue to evolve, offering advanced features, enhanced security, and a seamless integration with cloud platforms.

At Actian, we help organizations run faster, smarter applications on edge devices with our lightweight, embedded database, Actian Zen. With the latest release of Zen 16.0, we are committed to helping businesses simplify edge-to-cloud data management, boost developer productivity, and build secure, distributed IoT applications.


The post The Rise of Embedded Databases in the Age of IoT appeared first on Actian.


Read More
Author: Kunal Shah

Actian Ingres 12.0 Enhances Cloud Flexibility, Improves Security, and Offers up to 20% Faster Analytics

Today, we are excited to announce Actian Ingres 12.0*, which is designed to make cloud deployment simpler, enhance security, and deliver up to 20% faster analytics. The first release I worked on was Ingres 6.4/02 back in 1992 and the first bug I fixed was for a major US car manufacturer that used Ingres to drive its production line. It gives me great pride to see that three decades later, Ingres continues to manage some of the worldā€™s most mission-critical data deployments and that thereā€™s so much affection for the Ingres product.

With this release, weā€™re returning to the much-loved Ingres brand for all platforms. We continue to partner with our customers to understand their evolving business needs, and make sure that we deliver products that enable their modernization journey. With this new release, we focused on the following capabilities:

  • Backup to cloud and disaster recovery. Ingres 12.0 greatly simplifies these configurations for both on-premises and cloud deployments through the use of Virtual Machines (VMs) or Docker containers in Kubernetes.
  • Fortified protection automatically enables AES-256 encryption and hardened security to defend against brute force and Denial of Service (DoS) attacks.
  • Improved performance and workload management with up to 20% faster analytical queries using the X100 engine. Workload Manager 2.0 provides greater flexibility in allocation of resources to meet specific user demand.
  • Elevated developer experiences in OpenROAD 12. We make it quick and easy to create and transform database-centric applications for web and mobile environments.

These new capabilities, coupled with our previous enhancements to cloud deployment, are designed to help our customers deliver on their modernization goals. They reflect Actianā€™s vision to develop solutions that our customers can trust, are flexible to meet their specific needs, and are easy-to-use so they can thrive when uncertainty is the only certainty they can plan for.

Customers like Lufthansa Systems rely on Actian Ingres to power their Lido flight and route planning software. "It's very reassuring to know that our solution, which keeps airplanes and passengers safe, is backed up by a database that has for so many years been playing in the 'premier league'," said Rudi Koffer, Senior Database Software Architect at the Lufthansa Systems Airlines Operations Solutions division in Frankfurt Raunheim, Germany.

Experience the new capabilities first-hand. Connect with an Actian representative to get started. Below we dive into what each capability delivers.

A Database Built for Your Modernization Journey

Backup to Cloud and Disaster Recovery

Most businesses today have 24Ɨ7 data operations, so a system outage can have serious consequences. With Ingres 12.0 weā€™ve added new backup functionality to cloud and disaster recovery capabilities to dramatically reduce the risk of application downtime and data loss with a new component called IngresSync. IngresSync makes copies of a database to a target location for offsite storage and quick restoration.

Disaster recovery is now Docker or Kubernetes container-ready for Ingres 12.0 customers, allowing users to set up a read-only standby server in their Kubernetes deployment. Recovery Point Objectives are in the order of minutes and are user configurable.

Actian Ingres 12.0 Process to Disaster Recovery
Backup to cloud and disaster recovery are imperative for situations like:

  • Natural disasters: When a natural disaster such as a hurricane or earthquake strikes a local datacenter, cloud backups ensure that a copy of the data is readily available, and an environment can be spun up quickly in the cloud of your choosing to resume business operations.
  • Cyberattacks: In the event of a cyberattack such as ransomware, having cloud backups and a disaster recovery plan are essential to establish a non-compromised version of the database in a protected cloud environment.

Fortified Protection

Actian Ingres 12.0 enables AES-256 bit encryption on data in motion by default. AES-256 bit is considered one of the most secure encryption standards available today and is widely used to protect sensitive data. The 256-bit key size makes it extremely resistant to attacks and is often used by governments and highly regulated industries like banking and healthcare.

In addition, Actian Ingres 12.0 offers user-protected privileges and containerized User Defined Functions (UDFs). These UDFs, which can be authored in SQL, JavaScript, or Python, safeguard against unauthorized activities within the companyā€™s firewall that may target the database directly. Containerization of UDFs further enhances security by isolating user operations from core database management system (DBMS) processes.

Improved Performance and Workload Automation

Actian Ingres 12.0 customers can increase resource efficiency on transactional and analytic workloads in the same database. Workload Manager 2.0 enhances the data management experience with priority-driven queues, enabling the system to allocate resources based on predefined priorities and user roles. Now database administrators can define role-types such as DBAs, application developers, and end users, and assign a priority for each role-type.

The X100 engine, included with Ingres on Linux and Windows, brings efficiency improvements such as table cloning for X100 tables, which allows customers to conduct projects or experiments in isolation from core DBMS operations.

Our Performance Engineering Team has determined that for analytics workloads, these enhancements make Actian Ingres 12.0 the fastest Ingres version yet with a 20% improvement over prior versions. Transactional workloads see improved release over release performance.

Elevated Developer Experiences

Actian OpenROAD 12.0, the latest update to the Ingres graphical 4GL, also brings new enhancements designed to assist customers on their modernization journey. Surprisingly or not, we still have customers running forms-based applications. While many argue that these are the fastest and most reliable apps for data entry, our customers want to deliver more modern versions of them, mostly on tablet-style devices. To facilitate this modernization and to protect decades of investment in business logic, we have delivered enhanced versions of abf2or and WebGen in OpenROAD 12.0.

Additionally, OpenROAD users will benefit from the new gRPC-based architecture, which streamlines administration, bolsters concurrency support, and offers a more efficient framework, thanks to HTTP/2 and protocol buffers. The gRPC design is optimized for microservices and can be neatly packaged within a distinct container for deployment. The introduction of a newly distributed Docker file lays the groundwork for cloud deployment, providing production-ready business logic ready for integration with any modern client.

Leading Database Modernization and Innovation

These latest innovations join our recent milestones to solidify Actianā€™s position as a data and analytics leader. These achievements build on recent recognitions, including:

With this momentum, we are ready to accelerate solutions that our customers can trust, are flexible to their needs, and are easy-to-use.

Get hands-on with the new capabilities today. Connect with an Actian representative to get started.


*Actian Ingres includes the product formerly known as Actian X.

The post Actian Ingres 12.0 Enhances Cloud Flexibility, Improves Security, and Offers up to 20% Faster Analytics appeared first on Actian.


Read More
Author: Emma McGrattan

Types of Databases, Pros & Cons, and Real-World Examples

Databases are the unsung heroes behind nearly every digital interaction, powering applications, enabling insights, and driving business decisions. They provide a structured and efficient way to store vast amounts of data. Unlike traditional file storage systems, databases allow for the organization of data into tables, rows, and columns, making it easy to retrieve and manage information. This structured approach coupled with data governance best practices ensures data integrity, reduces redundancy, and enhances the ability to perform complex queries. Whether itā€™s handling customer information, financial transactions, inventory levels, or user preferences, databases underpin the functionality and performance of applications across industries.


Types of Information Stored in Databases


Telecommunications: Verizon
Verizon uses databases to manage its vast network infrastructure, monitor service performance, and analyze customer data. This enables the company to optimize network operations, quickly resolve service issues, and offer personalized customer support. By leveraging database technology, Verizon can maintain a high level of service quality and customer satisfaction.


E-commerce: Amazon
Amazon relies heavily on databases to manage its vast inventory, process millions of transactions, and personalize customer experiences. The companyā€™s sophisticated database systems enable it to recommend products, optimize delivery routes, and manage inventory levels in real-time, ensuring a seamless shopping experience for customers.


Finance: JPMorgan Chase
JPMorgan Chase uses databases to analyze financial markets, assess risk, and manage customer accounts. By leveraging advanced database technologies, the bank can perform complex financial analyses, detect fraudulent activities, and ensure regulatory compliance, maintaining its position as a leader in the financial industry.


Healthcare: Mayo Clinic
Mayo Clinic utilizes databases to store and analyze patient records, research data, and treatment outcomes. This data-driven approach allows the clinic to provide personalized care, conduct cutting-edge research, and improve patient outcomes. By integrating data from various sources, Mayo Clinic can deliver high-quality healthcare services and advance medical knowledge.


Types of Databases


The choice between relational and non-relational databases depends on the specific requirements of your application. Relational databases are ideal for scenarios requiring strong data integrity, complex queries, and structured data. In contrast, non-relational databases excel in scalability, flexibility, and handling diverse data types, making them suitable for big data, real-time analytics, and content management applications.

Types of databases: Relational databases and non-relational databases

Image © Existek

1. Relational Databases


Strengths

Structured Data: Ideal for storing structured data with predefined schemas
ACID Compliance: Ensures transactions are atomic, consistent, isolated, and durable (see the sketch at the end of this section)
SQL Support: Widely used and supported SQL for querying and managing data


Limitations

Scalability: Can struggle with horizontal scaling
Flexibility: Less suited for unstructured or semi-structured data


Common Use Cases

Transactional Systems: Banking, e-commerce, and order management
Enterprise Applications: Customer Relationship Management (CRM) and Enterprise Resource Planning (ERP) systems


Real-World Examples of Relational Databases

  • MySQL: Widely used in web applications like WordPress
  • PostgreSQL: Used by organizations like Instagram for complex queries and data integrity
  • Oracle Database: Powers large-scale enterprise applications in finance and government sectors
  • Actian Ingres: Widely used by enterprises and public sector organizations, such as in the Republic of Ireland
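
To illustrate the ACID behavior listed under Strengths above, here is a minimal sketch of a transactional transfer using Python's standard DB-API pattern. It uses the built-in sqlite3 module purely as a stand-in for a relational database, and the accounts table is hypothetical.

import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)')
conn.executemany('INSERT INTO accounts VALUES (?, ?)', [(1, 100.0), (2, 50.0)])
conn.commit()

try:
    # Both updates commit together or not at all: the atomicity guarantee.
    conn.execute('UPDATE accounts SET balance = balance - 25 WHERE id = 1')
    conn.execute('UPDATE accounts SET balance = balance + 25 WHERE id = 2')
    conn.commit()
except sqlite3.Error:
    conn.rollback()  # leave the data exactly as it was before the transaction

print(conn.execute('SELECT id, balance FROM accounts').fetchall())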

2. NoSQL Databases


Strengths

Scalability: Designed for horizontal scaling
Flexibility: Ideal for handling large volumes of unstructured and semi-structured data
Performance: Optimized for high-speed read/write operations


Limitations

Consistency: Some NoSQL databases sacrifice consistency for availability and partition tolerance (CAP theorem)
Complexity: Can require more complex data modeling and application logic
Common Use Cases

Big Data Applications: Real-time analytics, IoT data storage
Content Management: Storing and serving large volumes of user-generated content


Real-World Examples of NoSQL Databases

  • MongoDB: Used by companies like eBay for its flexibility and scalability
  • Cassandra: Employed by Netflix for handling massive amounts of streaming data
  • Redis: Utilized by X (formerly Twitter) for real-time analytics and caching
  • Actian Zen: Embedded database built for IoT and the intelligent edge. Used by 13,000+ companies
  • HCL Informix: Small footprint and self-managing. Widely used in financial services, logistics, and retail
  • Actian NoSQL: Object-oriented database used by the European Space Agency (ESA)

3. In-Memory Databases


Strengths
Speed: Extremely fast read/write operations due to in-memory storage
Low Latency: Ideal for applications requiring rapid data access


Limitations

Cost: High memory costs compared to disk storage
Durability: Data can be lost if not backed up properly


Common Use Cases

Real-Time Analytics: Financial trading platforms, fraud detection systems
Caching: Accelerating web applications by storing frequently accessed data (see the sketch below)


Real-World Examples of In-Memory Databases

  • Redis: Used by GitHub to manage session storage and caching
  • SAP HANA: Powers real-time business applications and analytics
  • Actian Vector: One of the world's fastest columnar databases for OLAP workloads
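
As a small example of the caching use case above, this sketch wraps a slower lookup with Redis. It is illustrative only: it assumes a Redis server on localhost and the redis-py client installed, and load_profile_from_database is a hypothetical stand-in for a query against the primary database.

import json
import redis

cache = redis.Redis(host='localhost', port=6379, db=0)

def load_profile_from_database(user_id):
    # Hypothetical slow path, e.g. a query against the system of record.
    return {'user_id': user_id, 'name': 'Ada', 'plan': 'pro'}

def get_profile(user_id):
    key = f'profile:{user_id}'
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)  # cache hit: served from memory
    profile = load_profile_from_database(user_id)
    cache.setex(key, 300, json.dumps(profile))  # cache for five minutes
    return profile

print(get_profile(42))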

Combinations of two or more database models are often developed to address specific use cases or requirements that cannot be fully met by a single type alone. Actian Vector blends OLAP principles, relational database functionality, and in-memory processing, enabling accelerated query performance for real-time analysis of large datasets. The resulting capability showcases the technical versatility of modern database platforms.


4. Graph Databases


Strengths

Relationships: Optimized for storing and querying relationships between entities
Flexibility: Handles complex data structures and connections


Limitations

Complexity: Requires understanding of graph theory and specialized query languages
Scalability: Can be challenging to scale horizontally


Common Use Cases

Social Networks: Managing user connections and interactions
Recommendation Engines: Suggesting products or content based on user behavior


Real-World Examples of Graph Databases

  • Neo4j: Used by LinkedIn to manage and analyze connections and recommendations
  • Amazon Neptune: Supports Amazonā€™s personalized recommendation systems

Factors to Consider in Database Selection


Selecting the right database involves evaluating multiple factors to ensure it meets the specific needs of your applications and organization. As organizations continue to navigate the digital landscape, investing in the right database technology will be crucial for sustaining growth and achieving long-term success. Here are some considerations:


1. Data Structure and Type

Structured vs. Unstructured: Choose relational databases for structured data and NoSQL for unstructured or semi-structured data.
Complex Relationships: Opt for graph databases if your application heavily relies on relationships between data points.


2. Scalability Requirements

Vertical vs. Horizontal Scaling: Consider NoSQL databases for applications needing horizontal scalability.
Future Growth: For growing data needs, cloud-based databases offer scalable solutions.


3. Performance Needs

Latency: In-memory databases are ideal for applications requiring high-speed transactions and low-latency, real-time data access.
Throughput: High-throughput applications may benefit from NoSQL databases.


4. Consistency and Transaction Needs

ACID Compliance: If your application requires strict transaction guarantees, a relational database might be the best choice (see the sketch after this list).
Eventual Consistency: NoSQL databases often provide eventual consistency, suitable for applications where immediate consistency is not critical.
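
To illustrate the ACID point above, here is a minimal JDBC sketch of an atomic transfer in which both updates commit together or neither is applied. The connection URL, table, and column names are hypothetical.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class TransferDemo {
    public static void transfer(String url, int fromAcct, int toAcct, double amount)
            throws SQLException {
        try (Connection c = DriverManager.getConnection(url)) {
            c.setAutoCommit(false); // group both updates into one transaction
            try (PreparedStatement debit = c.prepareStatement(
                     "UPDATE accounts SET balance = balance - ? WHERE id = ?");
                 PreparedStatement credit = c.prepareStatement(
                     "UPDATE accounts SET balance = balance + ? WHERE id = ?")) {
                debit.setDouble(1, amount);
                debit.setInt(2, fromAcct);
                debit.executeUpdate();

                credit.setDouble(1, amount);
                credit.setInt(2, toAcct);
                credit.executeUpdate();

                c.commit(); // both changes become visible together
            } catch (SQLException e) {
                c.rollback(); // neither change is applied on failure
                throw e;
            }
        }
    }
}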


5. Cost Considerations

Budget: Factor in both initial setup costs and ongoing licensing, maintenance, and support.
Resource Requirements: Consider the hardware and storage costs associated with different database types.


6. Ecosystem and Support

Community and Vendor Support: Evaluate the availability of support, documentation, and community resources.
Integration: Ensure that the database can integrate seamlessly with your existing systems and applications.

Databases are foundational to modern digital infrastructure. By choosing the right database for the right use case, organizations can meet their specific needs and treat data as a strategic asset. In the end, the goal is not just to store data but to harness its full potential to gain a competitive edge.

The post Types of Databases, Pros & Cons, and Real-World Examples appeared first on Actian.


Read More
Author: Dee Radh

Why Total Cost of Ownership Is a Critical Metric in High-Availability Databases


In the world of data management, the focus often zeroes in on the performance, scalability, and reliability of database systems. Total cost of ownership (TCO) is a crucial aspect that should hold equal, if not more, importance. TCO isn't just a financial metric; it's a comprehensive assessment that can significantly impact a business's […]

The post Why Total Cost of Ownership Is a Critical Metric in High-Availability Databases appeared first on DATAVERSITY.


Read More
Author: Eero Teerikorpi

How to Easily Add Modern User Interfaces to Your Database Applications

Modernizing legacy database applications brings all the advantages of the cloud alongside benefits such as faster development, user experience optimization, staff efficiency, stronger security and compliance, and improved interoperability. In my first blog on legacy application modernization with OpenROAD, a rapid database application development tool, I drilled into the many ways it makes it easier to modernize applications with low risk by retaining your existing business logic. However, there's still another big part of the legacy modernization journey: the user experience.

Users expect modern, intuitive interfaces with rich features and responsive design. Legacy applications often lack these qualities, which can require significant redesign and redevelopment during application modernization to meet modern user experience expectations. Not so with OpenROAD! It simplifies the process of creating modern, visually appealing user interfaces by providing developers with the range of tools and features discussed below.

The abf2or Migration Utility

The abf2or migration utility converts Application-By-Forms (ABF) applications to OpenROAD frames, including form layout, controls, properties, and event handlers. It migrates business logic implemented in ABF scripts to equivalent logic in OpenROAD, which may involve translating script code and ensuring compatibility with OpenROAD's scripting language. The utility also handles the migration of data sources, ensuring that data connections and queries continue to function, and can convert report definitions.

WebGen

WebGen is an OpenROAD utility that quickly generates web and mobile applications in HTML5 and JavaScript from OpenROAD frames, allowing OpenROAD applications to be deployed online and on mobile devices.

OpenROAD Workbench IDE

The OpenROAD Workbench Integrated Development Environment (IDE) is a comprehensive toolset for software development, particularly for creating and maintaining applications built using the OpenROAD framework. It provides tools specifically designed to migrate partitioned ABF applications to OpenROAD frames. Developers can then use the IDE's visual design tools to further refine and customize the applications.

Platform and Device Compatibility

Multiple platform support, including Windows and Linux, lets developers create user interfaces that can run seamlessly across different operating systems without significant modification. Developers can deliver applications to a desktop or place them on a web server for web browser access; OpenROAD installs them automatically if not already installed. The runtime for Windows Mobile enables deploying OpenROAD applications to mobile phones and Pocket PC devices.

Visual Development Environment

OpenROAD provides a visual development environment where developers can design user interface components using drag-and-drop tools, visual editors, and wizards. This makes it easier for developers to create complex user interface layouts without writing extensive code manually.

Component Library

OpenROAD offers a rich library of pre-built user interface components, such as buttons, menus, dialog boxes, and data grids. Developers can easily customize and integrate these components into applications, saving time and effort in user interface design.

Integration with Modern Technologies

Integration with modern technologies and frameworks such as HTML5, CSS3, and JavaScript allows developers to incorporate modern user interface design principles, such as responsive design and animations, into their applications.

Scalability and Performance

OpenROAD delivers scalable and high-performance user interfaces capable of handling large volumes of data and complex interactions. It optimizes resource utilization and minimizes latency, ensuring a smooth and responsive user experience.

Modernize Your OpenROAD Applications

Your legacy database applications may be stable, but most will not meet the expectations of users who want modern user interfaces. You don't have to settle for the status quo. OpenROAD makes it easy to deliver what your users are asking for with migration tools to convert older interfaces, visual design tools, support for web and mobile application development, an extensive library of pre-built user interface components, and much more.

The post How to Easily Add Modern User Interfaces to Your Database Applications appeared first on Actian.


Read More
Author: Teresa Wingfield

Legacy Transactional Databases: Oh, What a Tangled Web

Database modernization is increasingly needed for digital transformation, but it's hard work. There are many reasons why; this blog will drill down on one of the main ones: legacy entanglements. Often, organizations have integrated legacy databases with business processes, the applications they run (and their dependencies), and systems such as enterprise resource planning, customer relationship management, supply chain management, human resource management, point-of-sale systems, and e-commerce. Plus, there's middleware and integration, identity and access management, backup and recovery, replication, and other technology integrations to consider.

Your Five-Step Plan for Untangling Legacy Dependencies

So, how do you safely untangle legacy databases for database modernization in the cloud? Here's a list of steps that you can take for greater success and a less disruptive transition.

1. Understand and Document Dependencies and Underlying Technologies

There are many activities involved in identifying legacy dependencies. A good start is to review any available database documentation for integrations, including mentions of third-party libraries, frameworks, and services that the database relies on. Code review, with the help of dependency management tools, can identify dependencies within the legacy codebase. Developers, architects, database administrators, and other team members may be able to provide additional insights into legacy dependencies.
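
As one rough, hedged illustration of the code-review step, the sketch below scans a source tree for JDBC connection strings, a cheap way to surface database dependencies hiding in application code. The directory and file-type filters are assumptions, and purpose-built dependency management tools go much further.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.regex.Pattern;
import java.util.stream.Stream;

public class JdbcUrlScanner {
    // Matches JDBC URLs such as jdbc:informix-sqli://host:port/db
    private static final Pattern JDBC_URL = Pattern.compile("jdbc:[\\w-]+:[^\"'\\s]+");

    public static void main(String[] args) throws IOException {
        Path root = Path.of(args.length > 0 ? args[0] : "src"); // source tree to scan
        try (Stream<Path> files = Files.walk(root)) {
            files.filter(p -> p.toString().endsWith(".java")
                           || p.toString().endsWith(".properties"))
                 .forEach(JdbcUrlScanner::scanFile);
        }
    }

    private static void scanFile(Path file) {
        try {
            var matcher = JDBC_URL.matcher(Files.readString(file));
            while (matcher.find()) {
                // Each hit is a candidate database dependency to document.
                System.out.println(file + ": " + matcher.group());
            }
        } catch (IOException e) {
            System.err.println("Skipping unreadable file: " + file);
        }
    }
}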

2. Prioritize Dependencies

Prioritization is important since you can't do everything at once. Prioritizing legacy dependencies involves assessing the importance, impact, and risk associated with each dependency in the context of a migration or modernization effort. The highest priority should go to dependencies that are critical for the database to function and that carry the highest business value. When assessing business impact, include how dependencies affect revenue generation and critical business operations.

Also consider risks, interdependencies, and migration complexity when prioritizing dependencies. Outdated technologies, for example, can threaten database security and stability. Database dependencies can also have significant ripple effects throughout an organization's systems and processes that require careful consideration; altering a database schema during a migration can lead to application errors, malfunctions, or performance issues. Finally, some dependencies are easier to migrate or replace than others, which can affect their priority and urgency during migration.

3. Take a Phased Approach

A phased migration approach to database modernization that includes preparation, planning, execution, operation, and optimization helps organizations manage complexity, minimize risks, and ensure continuity of operations throughout the migration process. Upfront preparation and planning are necessary to ensure success. It may be beneficial to start small with low-risk or non-critical components to validate procedures and identify issues. The operating phase involves managing workloads, including performance monitoring, resource management, security, and compliance. It's critical to optimize activities and address concerns in these areas.

4. Reduce Risks

To reduce the risks associated with dependencies, consider approaches that run legacy and modern systems in parallel and use staging environments for testing. Replication offers redundancy that can help ensure business continuity. In case unexpected issues arise, always have a rollback plan to minimize disruption.

5. Break Down Monolithic Dependencies

Lastly, don't recreate the same monolithic dependencies found in your legacy database; avoiding them is how you get the full benefits of digital transformation. A microservices architecture can break the legacy database down into smaller, independent components that can be developed, deployed, and scaled independently. Changes to one part of the database then don't affect other parts, reducing the risk of system-wide failures and making the database much easier to maintain and enhance.
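
As a small, hedged illustration of that idea, the sketch below puts customer data behind a service interface instead of letting every module query shared tables directly, so the owning service can change its schema without rippling across the system. The names are hypothetical.

// Hypothetical service boundary: callers depend on this interface, not on the
// customer tables themselves, so the owning service can change its schema or
// even its database engine without breaking other components.
public interface CustomerService {
    CustomerView findCustomer(String customerId);
}

// A read-only view exposed across the boundary instead of raw table rows.
record CustomerView(String id, String name, String email) {}

// One possible implementation backed by the service's own datastore.
class JdbcCustomerService implements CustomerService {
    private final javax.sql.DataSource dataSource;

    JdbcCustomerService(javax.sql.DataSource dataSource) {
        this.dataSource = dataSource;
    }

    @Override
    public CustomerView findCustomer(String customerId) {
        String sql = "SELECT id, name, email FROM customer WHERE id = ?";
        try (var conn = dataSource.getConnection();
             var stmt = conn.prepareStatement(sql)) {
            stmt.setString(1, customerId);
            try (var rs = stmt.executeQuery()) {
                if (!rs.next()) {
                    throw new IllegalArgumentException("No such customer: " + customerId);
                }
                return new CustomerView(rs.getString("id"), rs.getString("name"),
                                        rs.getString("email"));
            }
        } catch (java.sql.SQLException e) {
            throw new RuntimeException("Lookup failed", e);
        }
    }
}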

How Actian Can Help with Database Modernization

The Ingres NeXt Readiness Assessment offers a pre-defined set of professional services tailored to your requirements. The service is designed to help you understand what is required to modernize Ingres and Application By Forms (ABF) or OpenROAD applications, and to provide recommendations important to formulating, planning, and implementing your modernization strategy.

Based on the knowledge gleaned from the Ingres NeXt Readiness Assessment, Actian can assist you with your pilot and production deployment. Actian can also facilitate a training workshop should you require preliminary training.

For more information, please contact services@actian.com.

The post Legacy Transactional Databases: Oh, What a Tangled Web appeared first on Actian.


Read More
Author: Teresa Wingfield

The Evolution of AI Graph Databases: Building Strong Relations Between Data (Part One)


We live in an era in which business operations and success are based in large part on how proficiently databases are handled. This is an area in which graph databases have emerged as a transformative force, reshaping our approach to handling and analyzing datasets. Unlike the conventional structure of traditional methods of accessing databases, which […]

The post The Evolution of AI Graph Databases: Building Strong Relations Between Data (Part One) appeared first on DATAVERSITY.


Read More
Author: Prashant Pujara

Auditing Database Access and Change
The increasing burden of complying with government and industry regulations imposes significant, time-consuming requirements on IT projects and applications. And nowhere is the pressure to comply with regulations greater than on data stored in corporate databases. Organizations must be hyper-vigilant as they implement controls to protect and monitor their data. One of the more useful […]


Read More
Author: Craig Mullins
