User-Friendly External Smartblobs Using a Shadow Directory

I am very excited about the HCL Informix® 15 external smartblob feature.

If you are not familiar with them, external smartblobs allow the user to store actual Binary Large Object (blob) and Character Large Object (clob) data external to the database. Metadata about that external storage is maintained by the database.

Notes: This article does NOT discuss details of the smartblobs feature itself, but rather proposes a solution to make the functionality more user-friendly. For details on feature behavior, setup, and new functions, see the documentation.

As of this writing, v15.0 does not have the ifx_lo_path function defined, as required below. This has been reported to engineering. The workaround is to create it yourself with the following command:

create dba function ifx_lo_path(blob)
  returns lvarchar
  external name '(sq_lo_path)'
  language C;

This article also does not discuss details of client programming required to INSERT blobs and clobs into the database.

The external smartblob feature was built for two main reasons:

1. Backup size

Storing blobs in the database itself can cause the database to become extremely large. Backups of such a database take an inordinate amount of time, and level-0 backups can become impossible. Offloading the actual blob contents to an external file system lessens the HCL Informix backup burden by putting the blob data somewhere else. The database still governs the storage of, and access to, the blob, but the physical blob is housed externally.

2. Easy access to blobs

Users would like easy access to blob data, with familiar tools, without having to go through the database. 

Using External Smartblobs in HCL Informix 15

HCL Informix 15 introduces external smartblobs. When you define an external smartblob space, you specify the external directory location (outside the database) where you would like the actual blob data to be stored. Then you assign blob column(s) to that external smartblob space when you CREATE TABLE. When a row is INSERTed, HCL Informix stores the blob data in the defined directory using an internal identifier for the filename.

Here’s an example of a customer forms table: custforms (denormalized and hardcoded for simplicity). My external sbspace directory is /home/informix/blog/resources/esbsp_dir1.

CREATE TABLE custforms(formid SERIAL, company CHAR(20), year INT, lname CHAR(20), 
formname CHAR(50), form CLOB) PUT form IN (esbsp);

Here, I INSERT a 2023 TaxForm123 document from a Java program for a woman named Sanchez, who works for Actian:

try (PreparedStatement p = c.prepareStatement(
         "INSERT INTO custforms (company, year, lname, formname, form) "
       + "VALUES (?,?,?,?,?)");
     FileInputStream is = new FileInputStream("file.xml")) {
    p.setString(1, "Actian");
    p.setInt(2, 2023);          // year is an INT column
    p.setString(3, "Sanchez");
    p.setString(4, "TaxForm123");
    p.setBinaryStream(5, is);
    p.executeUpdate();
}

After I INSERT this row, my external directory and file would look like this:

[informix@schma01-rhvm03 resources]$ pwd
/home/informix/blog/resources
[informix@schma01-rhvm03 resources]$ ls -l esbsp*
-rw-rw---- 1 informix informix 10240000 Oct 17 13:22 esbsp_chunk1

esbsp_dir1:
total 0
drwxrwx--- 2 informix informix 41 Oct 17 13:19 IFMXSB0
[informix@schma01-rhvm03 resources]$ ls esbsp_dir1/IFMXSB0
LO[2,2,1(0x102),1729188125]

Here LO[2,2,1(0x102),1729188125] is an actual file containing the data, which I could access directly. The problem is that if I want to access this file for Ms. Sanchez directly, I would first have to figure out that it belongs to her and is the tax document I want. It's very cryptic!

A User-Friendly Smartblob Solution

Informix customers I have talked to love the new external smartblobs feature but wish it were a little more user-friendly.

As in the above example, instead of putting Sanchez's 2023 TaxForm123 into a general directory called IFMXSB0 in a file called LO[2,2,1(0x102),1729188125], which together are meaningless to an end user, wouldn't it be nice if the file were located in an intuitive place like /home/forms/Actian/2024/TaxForm123/Sanchez.xml or something similar? Something meaningful, organized how YOU want it?

Having HCL Informix do this automatically is easier said than done, primarily because the database cannot intuitively know how any one customer wants to organize their blobs. Which directory substructure, exactly? Which column or columns form the file names, and in what order? Every use case is different.

Leveraging a User-Friendly Shadow Directory

The following solution shows how you can create your own user-friendly logical locations for your external smartblobs by automatically maintaining a lightweight shadow directory structure to correspond to actual storage locations. The solution uses a very simple system of triggers and stored procedures to do this.

Note: Examples here are shown on Linux, but other UNIX flavors should work also.

How to Set Up in 4 Steps

For each smartblob column in question:

STEP 1: Decide how you want to organize access to your files.

Decide what you want the base of your shadow directory to be and create it. In my case for this blog, it is /home/informix/blog/resources/user-friendly. You could probably implement this solution without a set base directory (as seen in the examples), but that may not be a good idea because users could unknowingly start creating directories everywhere.
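To make this step concrete, here is a minimal shell sketch. The base path is the one used in this blog; the || true simply keeps the sketch harmless on systems where that path is not writable:

```shell
# Create the shadow-directory base once, up front (path from this example).
BASEDIR="${BASEDIR:-/home/informix/blog/resources/user-friendly}"
mkdir -p "$BASEDIR" 2>/dev/null || true
```

You will likely also want to set ownership and permissions on the base at this point (see the Notes section on permissions).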

STEP 2: Create a create_link stored procedure and corresponding trigger for INSERTs.

This procedure makes sure the desired data-driven subdirectory structure exists below the base (mkdir -p), then forms a user-friendly logical link to the Informix smartblob file. From the trigger, you must pass this procedure every column from which you want to form the directory structure and filename.

CREATE PROCEDURE

CREATE PROCEDURE create_link (p_formid INT, p_company CHAR(20), p_year INT,
p_lname CHAR(20), p_formname CHAR(50))
DEFINE v_oscommand CHAR(500);
DEFINE v_custlinkname CHAR(500);
DEFINE v_ifmxname CHAR(500);
DEFINE v_basedir CHAR(100);
-- set the base directory
LET v_basedir = '/home/informix/blog/resources/user-friendly';
-- make sure directory tree exists
LET v_oscommand = 'mkdir -p ' || TRIM(v_basedir) || '/' || TRIM(p_company) || '/' || 
TO_CHAR(p_year);
SYSTEM v_oscommand; 

-- form full link name 
LET v_custlinkname = TRIM(v_basedir) || '/' || TRIM(p_company) || '/' || TO_CHAR(p_year) 
|| '/' || TRIM(p_lname) || '.' || TRIM(p_formname) || '.' || TO_CHAR(p_formid);

-- get the actual location 
SELECT IFX_LO_PATH(form::LVARCHAR) INTO v_ifmxname FROM custforms WHERE formid = p_formid; 

-- create the os link 
LET v_oscommand = 'ln -s -f ' || '''' || TRIM(v_ifmxname) || '''' || ' ' || v_custlinkname; 
SYSTEM v_oscommand;

END PROCEDURE;

CREATE TRIGGER

CREATE TRIGGER ins_tr INSERT ON custforms REFERENCING new AS post
FOR EACH ROW(EXECUTE PROCEDURE create_link (post.formid, post.company,
post.year, post.lname, post.formname));

STEP 3: Create a delete_link stored procedure and corresponding trigger for DELETEs.

This procedure will delete the shadow directory link if the row is deleted.

CREATE PROCEDURE

CREATE PROCEDURE delete_link (p_formid INT, p_company CHAR(20), p_year INT,
p_lname CHAR(20), p_formname CHAR(50))
DEFINE v_oscommand CHAR(500);
DEFINE v_custlinkname CHAR(500); 
DEFINE v_basedir CHAR(100);
-- set the base directory
LET v_basedir = '/home/informix/blog/resources/user-friendly';
-- form full link name
LET v_custlinkname = TRIM(v_basedir) || '/' ||
TRIM(p_company) || '/' || TO_CHAR(p_year) || '/' || TRIM(p_lname) || '.'
|| TRIM(p_formname) || '.' || TO_CHAR(p_formid);
-- remove the link
LET v_oscommand = 'rm -f -d ' || v_custlinkname;
SYSTEM v_oscommand;

END PROCEDURE;

CREATE TRIGGER

CREATE TRIGGER del_tr DELETE ON custforms REFERENCING old AS pre FOR EACH ROW
(EXECUTE PROCEDURE delete_link (pre.formid, pre.company, pre.year, pre.lname, pre.formname));

STEP 4: Create a change_link stored procedure and corresponding trigger for UPDATEs, if desired. In my example, Ms. Sanchez might marry Mr. Simon, and an UPDATE to her last name in the database occurs. I may then want to change all my user-friendly names from Sanchez to Simon. This procedure deletes the old link and creates a new one.

Notice the update trigger need only fire on the columns that form your directory structure and filenames.

CREATE PROCEDURE

CREATE PROCEDURE change_link (p_formid INT, p_pre_company CHAR(20), 
p_pre_year INT, p_pre_lname CHAR(20), p_pre_formname CHAR(50), p_post_company CHAR(20), 
p_post_year INT, p_post_lname CHAR(20), p_post_formname CHAR(50))

DEFINE v_oscommand CHAR(500);
DEFINE v_custlinkname CHAR(500);
DEFINE v_ifmxname CHAR(500);
DEFINE v_basedir CHAR(100);
-- set the base directory
LET v_basedir = '/home/informix/blog/resources/user-friendly';

-- get rid of old

-- form old full link name
LET v_custlinkname = TRIM(v_basedir) || '/' || TRIM(p_pre_company) || '/' || 
TO_CHAR(p_pre_year) || '/' || TRIM(p_pre_lname) || '.' || TRIM(p_pre_formname) || '.' 
|| TO_CHAR(p_formid) ;

-- remove the old link
LET v_oscommand = 'rm -f -d ' || v_custlinkname;
SYSTEM v_oscommand;

-- form the new
-- make sure directory tree exists
LET v_oscommand = 'mkdir -p ' || TRIM(v_basedir) || '/' || TRIM(p_post_company) || '/' || 
TO_CHAR(p_post_year);
SYSTEM v_oscommand;

-- form full link name
LET v_custlinkname = TRIM(v_basedir) || '/' || TRIM(p_post_company) || '/' || 
TO_CHAR(p_post_year) || '/' || TRIM(p_post_lname) || '.' || TRIM(p_post_formname) 
|| '.' || TO_CHAR(p_formid) ;

-- get the actual location
-- this is the same as before as id has not changed
SELECT IFX_LO_PATH(form::LVARCHAR) INTO v_ifmxname FROM custforms WHERE formid = p_formid;

-- create the os link
LET v_oscommand = 'ln -s -f ' || '''' || TRIM(v_ifmxname) || '''' || ' ' || v_custlinkname;
SYSTEM v_oscommand;

END PROCEDURE;

CREATE TRIGGER

CREATE TRIGGER upd_tr UPDATE OF formid, company, year, lname, formname ON custforms
REFERENCING OLD AS pre NEW as post

FOR EACH ROW(EXECUTE PROCEDURE change_link (pre.formid, pre.company, pre.year, pre.lname, 
pre.formname, post.company, post.year, post.lname, post.formname));

Results Example

Back to our example.

With this infrastructure in place, in addition to the Informix-named file, I now have user-friendly links on my file system that I can easily locate and identify.

INSERT

[informix@schma01-rhvm03 2023]$ pwd
/home/informix/blog/resources/user-friendly/Actian/2023
[informix@schma01-rhvm03 2023]$ ls
Sanchez.TaxForm123.2

If I do an ls -l, you can see that it is a link to the Informix blob file.

[informix@schma01-rhvm03 2023]$ ls -l
total 0
lrwxrwxrwx 1 informix informix 76 Oct 17 14:20 Sanchez.TaxForm123.2 -> 
/home/informix/blog/resources/esbsp_dir1/IFMXSB0/LO[2,2,1(0x102),1729188126]

UPDATE

If I then update her last name with UPDATE custforms SET lname = 'Simon' WHERE formid = 2, my file system now looks like this:

[informix@schma01-rhvm03 2023]$ ls -l
lrwxrwxrwx 1 informix informix 76 Oct 17 14:25 Simon.TaxForm123.2 -> 
/home/informix/blog/resources/esbsp_dir1/IFMXSB0/LO[2,2,1(0x102),1729188126]

DELETE

If I then DELETE this form with DELETE FROM custforms WHERE formid = 2, my directory structure looks like this:

[informix@schma01-rhvm03 2023]$ pwd
/home/informix/blog/resources/user-friendly/Actian/2023
[informix@schma01-rhvm03 2023]$ ls
[informix@schma01-rhvm03 2023]$

We Welcome Your Feedback

Please enjoy the new HCL Informix 15 external smartblob feature.

I hope this idea can make external smartblobs easier for you to use. If you have any feedback on the idea, especially on enhancements or experience in production, please feel free to contact me at mary.schulte@hcl-software.com. I look forward to hearing from you!

Find out more about the launch of HCL Informix 15.

Notes

1. Shadow directory permissions. In creating this example, I did not explore directory and file permissions, but rather just used general permissions settings on my sandbox server. Likely, you will want to control permissions to avoid some of the anomalies I discuss below.

2. Manual blob file delete. With external smartblobs, if permissions are not controlled, it is possible that a user might delete the physical smartblob file itself from its directory. HCL Informix itself cannot prevent this. If it does happen, HCL Informix does NOT delete the corresponding row; the blob file will just be missing. There may be aspects to links that can automatically handle this, but I have not investigated them for this blog.
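One way to detect this situation from the shadow-directory side is to scan for dangling links, since a link whose blob file was removed no longer resolves. A sketch, assuming GNU find (whose -xtype l matches symlinks with missing targets):

```shell
# Report shadow links whose blob file has disappeared (assumes GNU find).
BASEDIR="${BASEDIR:-/home/informix/blog/resources/user-friendly}"
find "$BASEDIR" -xtype l -print 2>/dev/null || true
```

Each reported name ends in the formid, so a follow-up database job could check the corresponding row and decide what to do with it.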

3. Link deletion in the shadow directory. If permissions are not controlled, it is possible that a user might delete a logical link formed by this infrastructure. This solution does not detect this. If this is an issue, I would suggest a periodic maintenance job that cross references the shadow directory links to blob files to detect missing links. For those blobs with missing links, write a database program to look up the row’s location with the IFX_LO_PATH function, and reform the missing link.

4. Unique identifiers. I highly recommend using unique identifiers in this solution. In this simple example, I used formid. You don't want to clutter things up, of course, but depending on how you structure your shadow directories and filenames, you may need to include more unique identifiers to avoid duplicate directory and link names.

5. Empty directories. I did not investigate whether rm options in the delete stored procedure could clean up empty directories that remain when the last item in them is deleted.
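One option I have not tested against this exact setup, but which is a common idiom, is a periodic prune with GNU find rather than rm. -delete implies -depth, so nested empty trees are removed bottom-up in one pass, and -mindepth 1 protects the base directory itself:

```shell
# Prune empty subdirectories left under the base after deletes (assumes GNU find).
BASEDIR="${BASEDIR:-/home/informix/blog/resources/user-friendly}"
find "$BASEDIR" -mindepth 1 -type d -empty -delete 2>/dev/null || true
```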

6. Production overhead. Excessive triggers and stored procedures can add overhead in a production environment. For this blog, I assume OLTP activity on blobs is not excessive, so production overhead should not be an issue. That said, this solution has NOT been tested at scale.

7. NULL values. Make sure to consider the presence and impact of NULL values in columns used in this solution. For simplicity, I did not handle them here.

Informix is a trademark of IBM Corporation in at least one jurisdiction and is used under license.

 

The post User-Friendly External Smartblobs Using a Shadow Directory appeared first on Actian.


Read More
Author: Mary Schulte

AI Predictions for 2025: Embracing the Future of Human and Machine Collaboration


Predictions are funny things. They often seem like a bold gamble, almost like trying to peer into the future with the confidence we inherently lack as humans. Technology’s rapid advancement surprises even the most seasoned experts, especially when it progresses exponentially, as it often does. As physicist Albert A. Bartlett famously said, “The greatest shortcoming […]

The post AI Predictions for 2025: Embracing the Future of Human and Machine Collaboration appeared first on DATAVERSITY.


Read More
Author: Philip Miller

Data Monetization: The Holy Grail or the Road to Ruin?


Unlocking the value of data is a key focus for business leaders, especially the CIO. While in its simplest form, data can lead to better insights and decision-making, companies are pursuing an entirely different and more advanced agenda: the holy grail of data monetization. This concept involves aggregating a variety of both structured and unstructured […]

The post Data Monetization: The Holy Grail or the Road to Ruin? appeared first on DATAVERSITY.


Read More
Author: Tony Klimas

Beyond Ownership: Scaling AI with Optimized First-Party Data


Brands, publishers, MarTech vendors, and beyond recently gathered in NYC for Advertising Week and swapped ideas on the future of marketing and advertising. The overarching message from many brands was one we’ve heard before: First-party data is like gold, especially for personalization. But it takes more than “owning” the data to make it valuable. Scale and accuracy […]

The post Beyond Ownership: Scaling AI with Optimized First-Party Data appeared first on DATAVERSITY.


Read More
Author: Tara DeZao

Accelerating Innovation: Data Discovery in Manufacturing

The manufacturing industry is in the midst of a digital revolution. You’ve probably heard these buzzwords: Industry 4.0, IoT, AI, and machine learning– all terms that promise to revolutionize everything from assembly lines to customer service. Embracing this digital transformation is key in improving your competitive advantage, but new technology doesn’t come without its own challenges. Each new piece of technology needs one thing to deliver innovation: data.

Data is the fuel powering your tech engines. Without the ability to understand where your data is, whether it’s trustworthy, or who owns the datasets, even the most powerful tools can overcomplicate and confuse the best data teams. That’s where modern data discovery solutions come in. They’re like the backstage crew making sure everything runs smoothly– connecting systems, tidying up the data mess, and making sure everyone has exactly what they need, when they need it. That means faster insights, streamlined operations, and a lower total cost of ownership (TCO). In other words, data access is the key to staying ahead in today’s fast-paced, highly competitive, increasingly sensitive manufacturing market. 

The Problem

Data from all aspects of your business is siloed. Whether it comes from sensors, legacy systems, cloud applications, suppliers, or customers, trying to piece it all together is daunting, time-consuming, and just plain hard. Traditional methods are slow, cumbersome, and definitely not built for today's needs. This fragmented approach not only slows down decision-making but keeps you from tapping into valuable insights that could drive innovation. And in a market where speed is everything, that's a recipe for falling behind.

So the big question is: how can you unlock the true potential of your data?

The Solution

So how do you make data intelligence a streamlined, efficient process? The answer lies in modern data discovery solutions, the unsung catalyst of digital transformation. Rather than simply integrating data sources, data discovery solutions excel in metadata management, offering complete visibility into your company's data ecosystem. They enable users, regardless of skill level, to locate where data resides and assess the quality and relevance of the information. By providing this detailed understanding of data context and lineage, organizations can confidently leverage accurate, trustworthy datasets, paving the way for informed decision-making and innovation.

Key Components

Easy-to-Connect Data Sources for Metadata Management

 One of the biggest hurdles in data integration is connecting to a variety of data sources, including legacy systems, cloud applications, and IoT devices. Modern data discovery tools like Zeenea offer easy connectivity, allowing you to extract metadata from various sources seamlessly. This unified view eliminates silos and enables faster, more informed decision-making across the organization.

Advanced Metadata Management

Metadata is the backbone of effective data discovery. Advanced metadata management capabilities ensure that data is well-organized, tagged, and easily searchable. This provides a clear context for data assets, helping you understand the origin, quality, and relevance of your data. This means better data search and discoverability.

Data Discovery Knowledge Graph

A data discovery knowledge graph serves as an intelligent map of your metadata, illustrating the intricate relationships and connections between data assets. It provides users with a comprehensive view of how data points are linked across systems, offering a clear picture of data lineage, from origin to current state. This visibility into the data journey is invaluable in manufacturing, where understanding the flow of information between production data, supply chain metrics, and customer feedback is critical. By tracing the lineage of data, you can quickly assess its accuracy, relevance, and context, leading to more precise insights and informed decision-making.

Quick Access to Quality Data Through Data Marketplace

A data marketplace provides a centralized hub where you can easily search, discover, and access high-quality data. This self-service model empowers your teams to find the information they need without relying on IT, accelerating time to insight. The result? Faster product development cycles, improved process efficiency, and enhanced decision-making capabilities.

User-Friendly Interface With Natural Language Search

Modern data discovery platforms prioritize user experience with intuitive, user-friendly interfaces. Features like natural language search allow users to query data using everyday language, making it easier for non-technical users to find what they need. This democratizes access to data across the organization, fostering a culture of data-driven decision-making.

Low Total Cost of Ownership (TCO)

Traditional metadata management solutions often come with a hefty price tag due to high infrastructure costs and ongoing maintenance. In contrast, modern data discovery tools are designed to minimize TCO with automated features, cloud-based deployment, and reduced need for manual intervention. This means more efficient operations and a greater return on investment.

Benefits

By leveraging a comprehensive data discovery solution, manufacturers can achieve several key benefits:

Enhanced Innovation

With quick access to quality data, teams can identify trends and insights that drive product development and process optimization.

Faster Time to Market

Automated implementation and seamless data connectivity reduce the time required to gather and analyze data, enabling faster decision-making.

Improved Operational Efficiency

Advanced metadata management and knowledge graphs help streamline data governance, ensuring that users have access to reliable, high-quality data.

Increased Competitiveness

A user-friendly data marketplace democratizes data access, empowering teams to make data-driven decisions and stay ahead of industry trends.

Cost Savings

With low TCO and reduced dependency on manual processes, manufacturers can maximize their resources and allocate budgets towards strategic initiatives.

Data is more than just a resource—it’s a catalyst for innovation. By embracing advanced metadata management and data discovery solutions, you can find, trust, and access data. This not only accelerates time to market but also drives operational efficiency and boosts competitiveness. With powerful features like API-led automation, a data discovery knowledge graph, and an intuitive data marketplace, you’ll be well-equipped to navigate the challenges of Industry 4.0 and beyond.

Call to Action

Ready to accelerate your innovation journey? Explore how Actian Zeenea can transform your manufacturing processes and give you a competitive edge.

Learn more about how our advanced data discovery solutions can help you unlock the full potential of your data. Sign up for a live product demo and Q&A. 

 

The post Accelerating Innovation: Data Discovery in Manufacturing appeared first on Actian.


Read More
Author: Kasey Nolan

Mind the Gap: Architecting Santa’s List – The Naughty-Nice Database


You never know what’s going to happen when you click on a LinkedIn job posting button. I’m always on the lookout for interesting and impactful projects, and one in particular caught my attention: “Far North Enterprises, a global fabrication and distribution establishment, is looking to modernize a very old data environment.” I clicked the button […]

The post Mind the Gap: Architecting Santa’s List – The Naughty-Nice Database appeared first on DATAVERSITY.


Read More
Author: Mark Cooper

From Silos to Synergy: Data Discovery for Manufacturing

Introduction

There is an urgent reality that many manufacturing leaders are facing, and that’s data silos. Valuable information remains locked within departmental systems, hindering your ability to make strategic, well-informed decisions. A data catalog and enterprise data marketplace solution provides a comprehensive, integrated view of your organization’s data, breaking down silos and enabling true collaboration. 

The Problem: Data Silos Impede Visibility

In your organization, each department maintains its own critical datasets: finance compiles detailed financial reports, sales leverages CRM data, marketing analyzes campaign performance, and operations tracks supply chain metrics. But here's the challenge: how confident are you that you even know what data is available, who owns it, or whether it is of good quality?

The issue goes beyond traditional data silos. It's not just that the data is isolated; it's that your teams are unaware of what data even exists. This lack of visibility creates a blind spot. Without a clear understanding of your company's data landscape, you face inefficiencies, inconsistent analysis, and missed opportunities. Departments end up duplicating work, using outdated or unreliable data, and making decisions based on incomplete information.

The absence of a unified approach to data discovery and cataloging means that even if the data is technically accessible, it remains hidden in plain sight, trapped in disparate systems without any context or clarity. Without a comprehensive search engine for your data, your organization will struggle to:

  • Identify data sources: You can’t leverage data if you don’t know it exists. Without visibility into all available datasets, valuable information often remains unused, limiting your ability to make fully informed decisions.
  • Assess data quality: Even when you find the data, how do you know it's accurate and up to date? A lack of metadata means you can't evaluate the quality or relevance of the information, leading to analysis based on faulty data.
  • Understand data ownership: When it's unclear who owns or manages specific datasets, you waste time tracking down information and validating its source. This confusion slows down projects and introduces unnecessary friction.

The Solution

Now, imagine the transformative potential if your team could search for and discover all available data across your organization as easily as using a search engine. Implementing a robust metadata management strategy—including data lineage, discovery, and cataloging—bridges the gaps between disparate datasets, enabling you to understand what data exists, its quality, and how it can be used. Instead of chasing down reports or sifting through isolated systems, your teams gain an integrated view of your company’s data assets.

  • Data Lineage provides a clear map of how data flows through your systems, from its origin to its current state. It allows you to trace the journey of your data, ensuring you know where it came from, how it’s been transformed, and if it can be trusted. This transparency is crucial for verifying data quality and making accurate, data-driven decisions.
  • Data Discovery enables teams to quickly search through your company’s data landscape, finding relevant datasets without needing to know the specific source system. It’s like having a powerful search tool that surfaces all available data, complete with context about its quality and ownership, helping your team unlock valuable insights faster.
  • A Comprehensive Data Catalog serves as a central hub for all your metadata, documenting information about the datasets, their context, quality, and relationships. It acts as a single source of truth, making it easy for any team member to understand what data is available, who owns it, and how it can be used effectively.

Revolutionizing Your Operations With Metadata Management

This approach can transform the way each department operates, fostering a culture of informed decision-making and reducing inefficiencies:

  • Finance gains immediate visibility into relevant sales data, customer demand forecasts, and historical trends, allowing for more accurate budgeting and financial planning. With data lineage, your finance team can verify the source and integrity of financial metrics, ensuring compliance and minimizing risks.
  • Sales can easily search for and access up-to-date product data, customer insights, and market analysis, all without needing to navigate complex systems. A comprehensive data catalog simplifies the process of finding the most relevant datasets, enabling your sales team to tailor their pitches and close deals faster.
  • Marketing benefits from an integrated view of customer behavior, campaign performance, and product success. Using data discovery, your marketing team can identify the most impactful campaigns and refine strategies based on real-time feedback, driving greater engagement and ROI.
  • Supply Chain Leaders can trace inventory data back to its origin, gaining full visibility into shipments, supplier performance, and potential disruptions. With data lineage, they understand the data’s history and quality, allowing for proactive adjustments and optimized procurement.
  • Manufacturing Managers have access to a clear, unified view of production data, demand forecasts, and operational metrics. The data catalog offers a streamlined way to integrate insights from across the company, enabling better decision-making in scheduling, resource allocation, and quality management.
  • Operations gains a comprehensive understanding of the entire production workflow, from raw materials to delivery. Data discovery and lineage provide the necessary context for making quick adjustments, ensuring seamless production and minimizing delays.

This strategy isn’t about collecting more data—it’s about creating a clearer, more reliable picture of your entire business. By investing in a data catalog, you turn fragmented insights into a cohesive, navigable map that guides your strategic decisions with clarity and confidence. It’s the difference between flying blind and having a comprehensive navigation system that leads you directly to success.

The Benefits: From Fragmentation to Unified Insight

When you prioritize data intelligence with a catalog as a cornerstone, your organization gains access to a powerful suite of benefits:

  1. Enhanced Decision-Making: With a unified view of all data sources, your team can make well-informed decisions based on real-time insights. Data lineage allows you to trace back the origin of key metrics, ensuring the accuracy and reliability of your analysis.
  2. Improved Collaboration Across Teams: With centralized metadata and clear data relationships, every department has access to the same information, reducing silos and fostering a culture of collaboration.
  3. Greater Efficiency and Reduced Redundancies: By eliminating duplicate efforts and streamlining data access, your teams can focus on strategic initiatives rather than time-consuming data searches.
  4. Proactive Risk Management: Full visibility into data flow and origins enables you to identify potential issues before they escalate, minimizing disruptions and maintaining smooth operations.
  5. Increased Compliance and Data Governance: Data lineage provides a transparent trail for auditing purposes, ensuring your organization meets regulatory requirements and maintains data integrity.

Conclusion

Data silos are more than just an operational inconvenience—they are a barrier to your company’s growth and innovation. By embracing data cataloging, lineage, and governance, you empower your teams to collaborate seamlessly, leverage accurate insights, and make strategic decisions with confidence. It is time to break down the barriers, integrate your metadata, and unlock the full potential of your organization’s data.

Call to Action

Are you ready to eliminate data silos and gain a unified view of your operations? Discover the power of metadata management with our comprehensive platform. Visit our website today to learn more and sign up for a live product demo and Q&A.

The post From Silos to Synergy: Data Discovery for Manufacturing appeared first on Actian.


Read More
Author: Kasey Nolan

5 Data Management Tool and Technology Trends to Watch in 2025


The market surrounding data management tools and technologies is quite mature. After all, the typical business has been making extensive use of data to help streamline its operations and decision-making for years, and many companies have long had data management tools in place. But that doesn’t mean that little is happening in the world of […]

The post 5 Data Management Tool and Technology Trends to Watch in 2025 appeared first on DATAVERSITY.


Read More
Author: Matheus Dellagnelo

How to Foster a Cross-Organizational Approach to Data Initiatives


In today’s business landscape, data reigns supreme. It is the cornerstone of effective decision-making, fuels innovation, and drives organizational success. However, despite its immense potential, many organizations struggle to harness the full power of their data due to a fundamental disconnect between IT and business teams. This division not only impedes progress but also undermines […]

The post How to Foster a Cross-Organizational Approach to Data Initiatives appeared first on DATAVERSITY.


Read More
Author: Abhas Ricky

Data Insights Ensure Quality Data and Confident Decisions
Every business (large or small) creates and depends upon data. One hundred years ago, businesses looked to leaders and experts to strategize and to create operational goals. Decisions were based on opinion, guesswork, and a complicated mixture of notes and records reflecting historical results that may or may not be relevant to the future.  Today, […]


Read More
Author: Kartik Patel

Securing Your Data With Actian Vector

The need for securing data from unauthorized access is not new. It has long been required by laws covering the handling of personally identifiable information (PII). But the increasing use of cloud data services for all kinds of proprietary data that is not PII now makes data security an important part of most data strategies.

This is the start of a series of blog posts that take a detailed look at how data security can be ensured with Actian Vector. The first post explains the basic concept of encryption at rest and how Actian Vector’s Database Encryption functionality implements it.

Understanding Encryption at Rest

Encryption at rest refers to the encryption of data at rest, meaning data that is persisted, usually on disk or in cloud storage. In a database system this is mainly the user data in tables and indexes, but it also includes the metadata describing the organization of that user data. The main purpose of encryption at rest is to protect the persisted data from unauthorized direct access on disk or in cloud storage, that is, access without a connection to the database system.

The encryption can be transparent to database applications. In this case, encryption and decryption are managed by the administrator, usually at the level of databases. The application does not need to be aware of the encryption: it connects to the database and works with the data as if there were no encryption at all. In Actian Vector, this type of encryption at rest is called database encryption.

Encryption at the application level, on the other hand, requires the application to handle encryption and decryption itself. Often this means that the user of the application has to provide an encryption key for both encryption (e.g., when data is inserted) and decryption (e.g., when data is selected). While more complicated, this approach gives the application and the user more control.

For example, encryption can be applied at a finer granularity: to specific tables, to columns within tables, or even to individual record values in table columns. It may be possible to use individual encryption keys for different data values, so users can encrypt their private data with their own key and be sure that no other user can see that data in clear text without it. In Actian Vector, encryption at the application level is referred to as function-based encryption.

Using Database Encryption in Actian Vector

In Actian Vector, the encryption that is transparent to the application works at the scope of a database and is therefore called database encryption. Whether a database is encrypted is determined when the database is created and cannot be changed later. When a database is created with database encryption, all persisted data in tables and indexes, as well as the metadata for the database, is encrypted.

The encryption method is 256-bit AES, which requires a 32 byte symmetric encryption key. Symmetric means that the same key is used to encrypt and decrypt the data. This key is individually generated for each encrypted database and is called a database (encryption) key.

To have the database key available, it is stored in an internal system file of the database server, where it is protected by a passphrase. This passphrase is provided by the user when creating the database. However, the database key is not used to directly encrypt the user data. Instead, it is used to encrypt, i.e. protect, yet another set of encryption keys that in turn encrypt the user data in the tables and indexes. This set of encryption keys is called the table (encryption) keys.
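This key hierarchy (passphrase protects the database key, which protects the table keys, which encrypt the data) can be sketched in a few lines of standard-library Python. This is a toy illustration only: the XOR-based `keystream_xor` merely stands in for AES-256 and must never be used for real encryption, and all names here are illustrative, not Actian Vector internals.

```python
import hashlib
import os

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher (SHA-256 in counter mode) standing in for AES-256."""
    out = bytearray()
    for i in range(0, len(data), 32):
        block = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        out.extend(b ^ k for b, k in zip(data[i:i + 32], block))
    return bytes(out)

# 1. Passphrase -> key-encryption key (KEK) via a slow key-derivation function.
passphrase = b"correct horse battery staple"
salt = os.urandom(16)
kek = hashlib.pbkdf2_hmac("sha256", passphrase, salt, 100_000)

# 2. Random 32-byte database key, persisted only in wrapped (encrypted) form.
db_key = os.urandom(32)
wrapped_db_key = keystream_xor(kek, db_key)

# 3. Per-table keys, wrapped by the database key.
table_key = os.urandom(32)
wrapped_table_key = keystream_xor(db_key, table_key)

# 4. User data is encrypted with the table key.
row = b"alice,4242-4242"
ciphertext = keystream_xor(table_key, row)

# Unlocking: passphrase -> KEK -> database key -> table key -> plaintext.
recovered_db_key = keystream_xor(kek, wrapped_db_key)
recovered_table_key = keystream_xor(recovered_db_key, wrapped_table_key)
assert keystream_xor(recovered_table_key, ciphertext) == row
```

The structure also shows why changing the passphrase is cheap (derive a new KEK and re-wrap only the small database key) compared with rotating the database key (re-encrypt the whole container of table keys).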

Once the database is created, the administrator can use the chosen passphrase to “lock” the database. When the database is locked, the encrypted data cannot be accessed. Likewise, the administrator also uses the passphrase to “unlock” a locked database and thus re-enable access to the encrypted data. When the database is unlocked, the administrator can change the passphrase. If desired, it is also possible to rotate the database key when changing the passphrase.

The rotation of the database key is optional, because it means that the whole container of the table keys needs to be decrypted with the old database key to then re-encrypt it with the new database key. Because this container of the table keys also contains other metadata, it can be quite large and thus the rotation of the database key can become a slow and computationally expensive operation. Database key rotation therefore is only recommended if there is a reasonable suspicion that the database key was compromised. Most of the time, changing only the passphrase should be sufficient. And it is done quickly.

With Actian Vector it is also possible to rotate the table encryption keys. This is done independently from changing the passphrase and the database key, and can be performed on a complete database as well as on individual tables. For each key that is rotated, the data must be decrypted with the old key and re-encrypted with the new key. In this case, we are dealing with the user data in tables and indexes. If this data is very large, the key rotation can be very costly and time consuming. This is especially true when rotating all table keys of a database.

A typical workflow of using database encryption in Actian Vector:

  • Create a database with encryption:
      1. createdb -encrypt <database_name>

This command prompts the user twice for the passphrase and then creates the database with encryption. The new database remains unlocked, i.e. it is readily accessible, until it is explicitly locked or until shutdown of the database system.

It is important that the creator of the database remembers the provided passphrase because it is needed to unlock the database and make it accessible, e.g. after a restart of the database system.

  • Lock the encrypted database:
      1. Connect to the unlocked database with the Terminal Monitor:
        sql <database_name>
      2. SQL to lock the database:
        DISABLE PASSPHRASE '<user supplied passphrase>'; \g

The SQL statement locks the database. New connect attempts to the database are rejected with a corresponding error. Sessions that connected previously can still access the data until they disconnect.

To make the database lock also immediately effective for already connected sessions, additionally issue the following SQL statement:

      1. CALL X100(TERMINATE); \g
  • Unlock the encrypted database:
      1. Connect to the locked database with the Terminal Monitor and option “-no_x100”:
        sql -no_x100 <database_name>
      2. SQL to unlock the database:
        ENABLE PASSPHRASE '<user supplied passphrase>'; \g

The connection with the “-no_x100” option connects without access to the warehouse data, but allows the administrative SQL statement to unlock the database.

  • Change the passphrase for the encrypted database:
      1. Connect to the unlocked database with the Terminal Monitor:
        sql <database_name>
      2. SQL to change the passphrase:
        ALTER PASSPHRASE '<old user supplied passphrase>' TO
        '<new passphrase>'; \g

Again, it is important that the administrator remembers the new passphrase.

After changing the passphrase for an encrypted database, it is recommended to perform a new database backup (a.k.a. “database checkpoint”) to ensure continued full database recoverability.

  • When the database is no longer needed, destroy it:
      1. destroydb <database_name>

Note that the passphrase of the encrypted database is not needed to destroy it. The command can only be performed by users with the proper privileges, i.e. the database owner and administrators.

This first blog post in the database security series explained the concept of encryption at rest and how transparent encryption — in Actian Vector called Database Encryption — is used.

The next blog post in this series will take a look at function-based encryption in Actian Vector.

The post Securing Your Data With Actian Vector appeared first on Actian.


Read More
Author: Martin Fuerderer

Synthetic Data Generation: Addressing Data Scarcity and Bias in ML Models


There is no doubt that machine learning (ML) is transforming industries across the board, but its effectiveness depends on the data it’s trained on. The ML models traditionally rely on real-world datasets to power the recommendation algorithms, image analysis, chatbots, and other innovative applications that make it so transformative.  However, using actual data creates two significant challenges […]

The post Synthetic Data Generation: Addressing Data Scarcity and Bias in ML Models appeared first on DATAVERSITY.


Read More
Author: Anshu Raj

5 Reasons to Invest in a Next-Gen Data Catalog

Organizations across every vertical face numerous challenges managing their data effectively and with full transparency. That’s at least partially due to data often being siloed across multiple systems or departments, making it difficult for employees to find, trust, and unlock the value of their company’s data assets.

Enter the Actian Zeenea Data Discovery Platform. This data intelligence solution is designed to address data issues by empowering everyone in an organization to easily find and trust the data they need to drive better decision-making, streamline operations, and ensure compliance with regulatory standards.

The Zeenea platform serves as a centralized data catalog and an enterprise data marketplace. By improving data visibility, access, and governance, it provides a scalable and efficient framework for businesses to leverage their data assets. The powerful platform helps organizations explore new and sustainable use cases, including these five:

1. Overcome Data Silo and Complexity Challenges

Data professionals are all too familiar with the struggles of working in environments where data is fragmented across departments and systems. This fragmentation leads to data silos that restrict access to critical information and create barriers to fully optimizing data.

Another downside to having barriers to data accessibility is that users spend significant time locating data instead of analyzing it, resulting in inefficiencies across business processes. The Zeenea platform addresses accessibility issues by providing a centralized, searchable repository of all data assets.

The repository is enriched with metadata—such as data definitions, ownership, and quality metrics—that gives context and meaning to the organization’s data. Both technical and non-technical users can quickly find and understand the data they need, either by searching for specific terms, filtering by criteria, or through personalized recommendations. This allows anyone who needs data to quickly and easily find what they need without requiring IT skills or relying on another team for assistance.

For example, marketing analysts looking for customer segmentation data for a new campaign can quickly locate relevant datasets in the Zeenea platform. Whether analysts know exactly what they’re searching for or are browsing through the data catalog, the platform provides insights into each dataset’s source, quality, and usage history.

Based on this information, analysts can decide whether to request access to the actual data or consult the data owner to fix any quality issues. This speeds up the data usage process and ensures that decision-makers have access to the best available data relevant for the campaign.

2. Solve the Issue of Limited Data Access for Business Users

In many organizations, data access is often limited to technical teams such as IT or data engineering. Being dependent on specialty or advanced skills creates bottlenecks because business users must request data from other teams. This reliance on IT or engineering departments leads to delayed insights and increases the workload on technical teams that may already be stretched thin.

The Zeenea platform democratizes data access by enabling non-technical users to explore and “shop” for data in a self-service environment. With Zeenea’s Enterprise Data Marketplace, business users can easily discover, request, and use data that has been curated and approved by data governance teams. This self-service model reduces reliance on IT and data specialists, empowering employees across the organization to make faster, data-driven decisions.

Barrier-free data access can help all users and departments. For instance, sales managers preparing for a strategy meeting can use the Enterprise Data Marketplace to access customer reports and visualizations—without needing to involve the data engineering team.

By using the Zeenea platform, sales managers can pull data from various departments, such as finance, sales, or marketing, to create a comprehensive view of customer behavior. This allows the managers to identify opportunities for improved engagement as well as cross-sell and upsell opportunities.

3. Gain Visibility Into Data Origins and Compliance Requirements

As organizations strive to meet stringent regulatory requirements that seem to change constantly, visibility into both data origins and data transformations becomes essential. Understanding how data has been sourced, modified, and managed is crucial for compliance and auditing. Without proper tracking systems, however, tracing this information accurately can be extremely difficult.

This is another area where the Zeenea platform can help. It provides detailed data lineage tracking, allowing users to trace the entire lifecycle of a dataset. From data’s origin to its transformation and usage, the platform offers a visual map of data flows, making it easier to troubleshoot errors, detect anomalies, and verify the accuracy of reports.

With this capability, organizations can present clear audit trails to demonstrate compliance with regulatory standards. A common use case is in the financial sector. A bank facing a regulatory audit can leverage Zeenea’s data lineage feature to show auditors exactly how financial data has been handled.

By comprehensively tracing each dataset, the bank can easily demonstrate compliance with industry regulations. Plus, having visibility into data reduces the complexity of the audit process and builds trust in data management practices.

4. Provide Ongoing Data Governance

Managing data governance in compliance with internal policies and external regulations is another top priority for organizations. With laws such as GDPR and HIPAA that have strict penalties, companies must ensure that sensitive data is handled securely and data usage is properly tracked.

The Zeenea platform delivers capabilities to meet this challenge head-on. It enables organizations to define and enforce governance rules across their data assets, ensuring that sensitive information is securely managed. Audit trail, access control, and data lineage features help organizations comply with regulatory requirements. These features also play a key role in ensuring data is properly cataloged and monitored.

Organizations in industries like healthcare that handle highly sensitive information can especially benefit from the Zeenea platform, which helps them manage access controls, encryption, and data monitoring. This ensures compliance with HIPAA and other regulations while safeguarding patient privacy. The platform also streamlines internal governance practices, ensuring that all data users follow established guidelines for data security.

5. Build a Data-Driven Organization

The Actian Zeenea Data Discovery Platform offers a comprehensive solution to solve modern data management challenges. By improving data discovery, governance, and access, the Zeenea platform removes barriers to data usage, making it easier for organizations to unlock the full value of their data assets.

Whether it’s giving business users self-service capabilities, streamlining compliance efforts, or supporting a data mesh approach that decentralizes data management, the platform gives individual departments the ability to manage their own data while maintaining organization-wide visibility. Additionally, the platform provides the tools and infrastructure needed to thrive in today’s data-driven world.

Experience a Live Demo

Organizations looking to improve their data outcomes should consider the Zeenea platform. By creating a single source of truth for data across the enterprise, the solution enables faster insights, smarter decisions, and stronger compliance—all key drivers of business success in the digital age. Find out more by joining a live product demo.

The post 5 Reasons to Invest in a Next-Gen Data Catalog appeared first on Actian.


Read More
Author: Dee Radh

Book of the Month: “AI Governance Comprehensive”


Welcome to December 2024’s “Book of the Month” column. This month, we’re featuring “AI Governance Comprehensive: Tools, Vendors, Controls, and Regulations” by Sunil Soares, available for free download on the YourDataConnect (YDC) website.  This book offers readers a strong foundation in AI governance. While the emergence of generative AI (GenAI) has brought AI governance to […]

The post Book of the Month: “AI Governance Comprehensive” appeared first on DATAVERSITY.


Read More
Author: Mark Horseman

Technical and Strategic Best Practices for Building Robust Data Platforms


In the AI era, organizations are eager to harness innovation and create value through high-quality, relevant data. Gartner, however, projects that 80% of data governance initiatives will fail by 2027. This statistic underscores the urgent need for robust data platforms and governance frameworks. A successful data strategy outlines best practices and establishes a clear vision for data architecture, […]

The post Technical and Strategic Best Practices for Building Robust Data Platforms appeared first on DATAVERSITY.


Read More
Author: Alok Abhishek

The Rise of Cloud Repatriation in Storage Solutions


In recent years, a large number of businesses have jumped on the wave and moved their applications and data to the cloud, with nine out of 10 IT professionals considering the cloud a cornerstone of their digital strategy. While the cloud can offer flexibility, the reality is managing cloud expenses and resulting security liabilities have become significantly […]

The post The Rise of Cloud Repatriation in Storage Solutions appeared first on DATAVERSITY.


Read More
Author: Roger Brulotte

Data Speaks for Itself: Data Validation – Data Accuracy Imposter or Assistant?
In my last article, “The Shift from Syntactic to Semantic Data Curation and What It Means for Data Quality” published in the August 2024 issue of this newsletter, I argued how the adoption of generative AI will change the focus and scope of data quality management (DQM). Because data quality is measured in the degree […]


Read More
Author: Dr. John Talburt

Data Errors in Financial Services: Addressing the Real Cost of Poor Data Quality
Data quality issues continue to plague financial services organizations, resulting in costly fines, operational inefficiencies, and damage to reputations. Even industry leaders like Charles Schwab and Citibank have been severely impacted by poor data management, revealing the urgent need for more effective data quality processes across the sector.  Key Examples of Data Quality Failures  — […]


Read More
Author: Angsuman Dutta

Actian Zen: The Market-Leading Embedded Database – Proven!

Get to Know Actian Zen

Actian Zen is a high-performance, embedded database management system designed for efficient data management in various applications, particularly IoT and edge computing. It offers several key features:

  • High Performance: Optimized for real-time data processing and quick response times.
  • Scalability: Can handle increasing data volumes and diverse endpoints.
  • Reliability: Ensures data integrity and availability even in challenging environments.
  • Security: Provides robust security features to protect sensitive data.
  • Flexibility: Supports NoSQL and SQL data access.
  • Easy Integration: Integrates with applications and devices using API.
  • Low Maintenance: Minimal administrative overhead.

With more than 13,000 customers, Actian Zen has been utilized around the world across multiple industries to capture data generated by mobile devices, IoT sensors, edge gateways and even complex machinery, giving its users a very high level of confidence in reporting performance at the edge.

Putting Zen Through its Paces: The TPCx-IoT Benchmark

Actian has enjoyed terrific success with its Zen customer base, and its customers in turn have benefited greatly from Zen’s strong performance. At the same time, Actian wanted an unbiased third-party review, comparison, and benchmark of the product against similar market offerings. In October 2024, Actian commissioned the McKnight Consulting Group to run the TPCx-IoT benchmark against two of its key competitors: MongoDB and MySQL.

TPCx-IoT is a benchmark developed by the Transaction Processing Performance Council (TPC) to measure the performance, scalability, and price-performance of IoT (Internet of Things) data ingestion systems. It simulates real-time data ingestion and processing from IoT devices, evaluating a system’s ability to handle large volumes of time-series data efficiently.

Key features of the TPCx-IoT benchmark include:

  • Real-World Simulation: The benchmark simulates a realistic IoT scenario with a large
    number of devices generating data.
  • Performance Metrics: It measures performance metrics like throughput (IoTps),
    latency, and price-performance.
  • Vendor-Neutral: It provides a fair and objective comparison of different IoT data
    management solutions.
  • Scalability Testing: It evaluates the system’s ability to handle increasing data volumes
    and device counts.

By using the TPCx-IoT benchmark, organizations can compare the performance of different IoT data management solutions and select the best solution for their specific needs.

Discussion of Results: Actian Zen is a Powerful IoT Engine

The results of the benchmark test showed that Actian Zen was far superior to its key competitors in two important areas: throughput and latency.

  • Throughput: Actian Zen processes data significantly faster than the other offerings, reaching up to 7,902 records per second compared to MongoDB’s 2,099. MySQL lags far behind at 162 records per second. This means Actian Zen has up to roughly 50x the throughput of the competition!
  • Latency: Actian Zen consistently demonstrates the lowest latency (time taken to process each record) across various sensor configurations, displaying up to 650x lower latency than the competition. MySQL exhibits the highest latency.
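As a quick sanity check on the headline claim, the reported records-per-second figures above can be turned into ratios:

```python
# Published TPCx-IoT throughput figures (records per second) from the summary.
zen_tps, mongo_tps, mysql_tps = 7_902, 2_099, 162

print(f"Zen vs MongoDB: {zen_tps / mongo_tps:.1f}x")  # ~3.8x
print(f"Zen vs MySQL:   {zen_tps / mysql_tps:.1f}x")  # ~48.8x, i.e. roughly 50x
```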

According to the McKnight Consulting Group, “In our evaluation of various embedded databases, Actian Zen emerged as a very compelling solution for enterprise-grade IoT workloads.”

Given the importance of real-time data availability at the edge, both throughput and latency performance must be as strong as possible. This is especially true across the myriad industry use cases, in healthcare, logistics, transportation, and beyond, where a true, confident view of current endpoint performance is critical. It is why customers like Global Shop Solutions and Taifun Software AG have confidence in Actian Zen, and why so many other organizations are taking a closer look.

Next Steps: Get the Report!

Curious about the benchmark? Want to know more? Here’s how to get started:

  • To read the full TPCx-IoT benchmark report from the McKnight Consulting Group, click here.
  • To find out more about Zen, check out our website here.

The post Actian Zen: The Market-Leading Embedded Database – Proven! appeared first on Actian.


Read More
Author: Phil Ostroff

Spooktacular Fun: Actian Halloween Celebration

As the crisp autumn air settles in and leaves turn vibrant shades of orange and yellow, it’s that time of year again—Halloween! This year, Actian went all out to celebrate the spooky season with fun-filled Halloween activities, featuring a pumpkin carving contest, trick-or-treat bingo, and a costume contest!

Pumpkin Carving Contest


A highlight of our Halloween festivities was undoubtedly the pumpkin carving contest. On the morning of the event, the office was transformed into a pumpkin patch, with tables adorned with bright orange pumpkins, carving tools, and a variety of paints and decorations. The Actian teams got to work on creating their masterpieces.

The creativity displayed was remarkable, featuring everything from classic jack-o’-lanterns with cheerful grins to elaborate designs of cats, mushrooms, and eerie faces. Actian employees showcased their pumpkin artistry in the office gallery, where they could vote for their favorites. Colleagues wandered through the display, casting their votes, and Mollie Kendall emerged as the contest winner with her spooky jack-o’-lantern.

Costume Contest


What office Halloween party would be complete without a costume contest? Actian employees showcased their creativity and Halloween spirit in full force! Costumes included everything from clever pop culture references to DIY masterpieces.

The competition was intense, but in the end, the winners earned well-deserved prizes and bragging rights for the year. This event successfully brought the team together, fostering laughter and celebrating the season through friendly rivalry.

First Place: Karl Schimmel, Actian customer engineer, stole the show with his hilarious Andy Reid costume, which would have made Mahomes proud.

Second Place: La’Quinn Drick Hunter wowed everyone with his impressive Luigi costume, despite not having a Mario to accompany him.

Third Place: Kasey Nolan showcased her creativity with a unique mushroom costume that secured her third place.

With such a variety of innovative costumes, the team is already looking forward to next year’s contest!

A Celebration of Team Spirit

Beyond the contests, the Halloween celebration was a fantastic opportunity for Actian teams to take a break from a very productive year. The atmosphere was filled with joy, and it was heartwarming to see everyone come together to celebrate not just Halloween, but the vibrant culture of Actian, both in person and online.

As the day drew to a close, it was clear that this Halloween celebration was more than just a fun escape from work—it was a reminder of the creativity, teamwork, and community spirit that make our company a great place to be. With smiles, full bellies from treats, and a collection of hilarious memories, we’re already looking forward to next year’s celebration!

As we move into the final mile of the year and start to ramp up on holiday planning, we hope you enjoyed a peek into the Actian culture we are so proud of, and we invite you to share your own Halloween fun!

The post Spooktacular Fun: Actian Halloween Celebration appeared first on Actian.


Read More
Author: Savannah Bruggeman

Experience Near-Unlimited Storage Capacity With HCL Informix® 15

We are thrilled to unveil HCL Informix® 15, re-imagined for organizations looking for the best way to modernize out-of-support IBM® Informix® applications. Our customers love HCL Informix because it is fast, reliable, and scalable. With the release of HCL Informix 15, we build upon this proud heritage with:

  • HCL Informix 4GL, a fourth-generation business application development environment that is designed to simplify the building of data-centric business applications, now available from Actian.
  • Larger row and page addresses that enhance scalability for large-scale data storage and processing. The new maximum capacity for a single instance is four times the estimated size of the internet.
  • External smartblobs enable the storage of binary large objects, like static documents, videos, and photos, in an external file system to facilitate faster archiving.
  • Invisible indexes help developers and DBAs fine-tune queries by letting them flexibly omit individual indexes from query planning, revealing which indexes are critical to specific queries by whether their absence impacts query runtime. 

These capabilities fortify HCL Informix’s already solid foundation to underpin the next generation of mission-critical applications. They reflect our vision for a more powerful offering that guarantees seamless business continuity and secures the longevity of your organization’s existing applications.

HCL Informix 15 now includes cloud-enabled product capabilities including a Kubernetes containerization deployment option and updated REST APIs (previously only available in HCL OneDB). For customers using HCL OneDB 1.0 and 2.0, we will adhere to the announced lifecycle dates and work with you on a recommended in-place upgrade to HCL Informix.

HCL Informix customers like Equifax are looking forward to taking advantage of these new capabilities to improve their business use cases in the near future.

“HCL Informix 15 will empower Equifax to quickly process a steady stream of payments, claims decisions, tax verifications, and more, enabling us to make data-driven decisions,” said Nick Fuller, Associate Vice President of Technology at Equifax. “Its capacity to handle vast amounts of data gives us confidence in its ability to meet our demand for rapid and efficient processing.”

Watch the Webinar >

Building an Advanced Database for Modern Enterprise Applications

4GL: Easily maintain and recompile existing 4GL applications


While many IBM Informix customers are familiar with 4GL, Actian is now offering HCL Informix 4GL and HCL Informix SQL for the first time. HCL Informix customers can use 4GL and ISQL to develop and debug applications, including building new menus, forms, screens, and reports with ease. 4GL reduces the time it takes to build and maintain HCL Informix applications and to perform database operations like querying, updating, and managing data. Informix 4GL also includes a powerful report writer that enables the creation of complex reports, which is particularly useful for generating business reports from data stored in HCL Informix.

HCL Informix 4GL accelerates the building of applications such as:

  • Accounting Systems: Track money owed by and to the business, including invoicing, payment processing, and reports.
  • Inventory Management Systems: Manage storage locations, stock movements, and inventory audits.
  • Human Resources Systems: Maintain detailed records of employee information, performance, and benefits. 

HCL Informix 15 Server Re-Architected for Massive Storage Capacity Improvement

Larger Row and Page Addresses: Manage Large Data Sets Without Compression

Have peace of mind knowing that data volume limitations are an issue of the past with HCL Informix 15. That means improved reliability and better use of resources because organizations won’t need to compress or fragment tables.

When Informix Turbo launched in 1989, Informix architects believed 4 bytes would more than suffice for uniquely addressing each row: each page could hold a maximum of 255 rows, and each table could have a maximum of 16.7 million pages. Now, some of the largest HCL Informix customers are pushing those original limits to their edge. While it is possible to fragment tables to get around the maximum page limit, that's an imperfect solution at scale. So we've expanded storage limits dramatically: maximum storage capacity is now half a yottabyte, four times the estimated size of the internet.
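The arithmetic behind those original limits is easy to check. The sketch below assumes the classic 4-byte rowid split of 1 byte for a row's slot within a page and 3 bytes for the page number; the exact internal encoding is an implementation detail, but the split reproduces the figures above:

```python
# Classic 4-byte rowid: a page number plus a slot (row position) within
# that page, packed into 32 bits.
SLOT_BITS = 8                    # 1 byte -> row slot within a page
PAGE_BITS = 32 - SLOT_BITS       # remaining 3 bytes -> page number

max_rows_per_page = 2**SLOT_BITS - 1    # 255 rows per page
max_pages_per_table = 2**PAGE_BITS      # 16,777,216 (~16.7 million) pages

print(f"max rows per page:   {max_rows_per_page}")
print(f"max pages per table: {max_pages_per_table:,}")
```

Expanding those address fields is what lifts the practical ceiling on table size without resorting to compression or fragmentation.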


External Smartblobs: Store Large Objects With Ease

Large objects such as video and audio files have traditionally been difficult for transactional databases like HCL Informix to manage, because storing them efficiently requires time-consuming compression.

With HCL Informix 15, external smartblobs let developers store the objects on a file system while keeping only the metadata in the database. Instead of compressing the data, users can now create a special smartblob space that holds the file metadata, with the object files stored externally.

External smartblobs deliver benefits across a variety of use cases, including:

  • Quality Assurance: Analyze how well a real-time monitoring system built on HCL Informix detects faulty products on an assembly line. Auditors can look up a discarded product in the metadata and find the image files of the faulty item without impacting the underlying application. 
  • Tax Authority: Tax administrators need to capture tax returns in case they need to audit a company or individual. They can store the static tax return documents with a specific ID and access them through the HCL Informix application just by using the metadata.
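The underlying pattern (metadata in the database, object bytes on the file system) can be sketched generically. The example below uses Python's sqlite3 and pathlib as stand-ins, not the HCL Informix API; the tax_returns table, its columns, and the directory layout are invented for illustration:

```python
import hashlib
import sqlite3
from pathlib import Path

# Hypothetical external directory, standing in for the external
# smartblob space a DBA would configure in HCL Informix.
BLOB_DIR = Path("external_blobs")
BLOB_DIR.mkdir(exist_ok=True)

db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE tax_returns (
        return_id TEXT PRIMARY KEY,
        blob_path TEXT NOT NULL,  -- metadata: where the bytes live
        sha256    TEXT NOT NULL   -- metadata: integrity check
    )
""")

def store_document(return_id: str, data: bytes) -> Path:
    """Write the object to the file system; keep only metadata in the DB."""
    path = BLOB_DIR / f"{return_id}.bin"
    path.write_bytes(data)
    db.execute("INSERT INTO tax_returns VALUES (?, ?, ?)",
               (return_id, str(path), hashlib.sha256(data).hexdigest()))
    return path

def fetch_document(return_id: str) -> bytes:
    """Resolve the external file via its metadata row."""
    (path,) = db.execute(
        "SELECT blob_path FROM tax_returns WHERE return_id = ?",
        (return_id,)).fetchone()
    return Path(path).read_bytes()

store_document("RET-2024-001", b"...scanned tax return...")
assert fetch_document("RET-2024-001") == b"...scanned tax return..."
```

Because the database row carries only a path and a checksum, backups stay small, while the documents remain reachable both through the application and directly on the file system with familiar tools.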

Invisible Indexes: Optimize Your Queries Faster

Indexes are special data structures that improve the speed of data retrieval operations on a database table. They work similarly to an index in a book, allowing the database to find and access data faster without having to scan every row in a table. However, not every index will be used by the queries in an application. HCL Informix 15 enables users to make certain indexes invisible while an application is running, helping them test which indexes impact queries and which do not, for better operational efficiency.

Invisible indexes support real-world use cases such as:

  • E-commerce Platforms often deal with large volumes of transactions and queries. Invisible indexes can be used to test and optimize query performance without disrupting the shopping experience.
  • Healthcare System databases require efficient data retrieval for patient records and research. Invisible indexes can help optimize these queries without affecting the overall system.
  • Customer Relationship Management (CRM) systems handle vast amounts of customer data. Invisible indexes can be used to improve the performance of specific queries related to customer interactions and their history.

Start Your Modernization Project With HCL Informix 15

The Actian team is ready to support you as you get started on your modernization project with HCL Informix 15. 

Check out the on-demand webinar “Secure Your Future with HCL Informix® 15” to learn more. Also, see how your peers are using HCL Informix to modernize their applications. Plus, wait until the end to hear about our limited one-time offer.

Watch the Webinar >


Informix is a trademark of IBM Corporation in at least one jurisdiction and is used under license.

 

The post Experience Near-Unlimited Storage Capacity With HCL Informix® 15 appeared first on Actian.


Read More
Author: Emily Taylor

The Game-Changing Data Discovery Platform for Data Democratization

In today’s ever-changing data landscape, managing, discovering, and utilizing all relevant data effectively is a critical challenge for organizations, regardless of their industry. As both data volumes and the number of sources grow, so does the complexity of organizing and accessing that information, especially when it resides across disparate systems and platforms.

The Zeenea Data Discovery Platform is designed to address these challenges head-on. It enables both technical and non-technical users to quickly and efficiently find, access, and trust enterprise data, regardless of where it’s stored.

What Sets the Zeenea Platform Apart in the Marketplace?

The Zeenea Data Discovery Platform is a comprehensive metadata management solution that streamlines data governance, simplifies data discovery, and manages vast data assets. It’s built with flexibility and ease-of-use at its core, catering to both data professionals and everyday data users who don’t have advanced IT skill sets.

One benefit that sets the Zeenea platform apart is its ability to automate processes, ensuring that organizations can keep pace with their rapidly evolving data environment without extensive manual effort. The Zeenea platform is powered by two essential applications:

  1. Zeenea Studio. This application is geared toward data professionals, including chief data officers (CDOs), data engineers, data stewards, and data governance teams. It is designed to make data documentation easy and automated, allowing teams to enrich and manage their data with precision. Users can curate data assets, ensuring they are well-organized and readily accessible.

For organizations struggling with the complexity of metadata management, Zeenea Studio simplifies the process by automating the collection and curation of data, reducing manual overhead and increasing accuracy. Data stewards, in particular, can use the platform to ensure that their organization’s data is both trustworthy and compliant with internal and external regulations.

  2. Zeenea Explorer. This application is designed for everyday users, making data exploration and discovery simple and intuitive. Whether a user is a data scientist, an analyst, or a business stakeholder, the platform allows them to find relevant data quickly and easily. Its interface is user-friendly, ensuring that even those without a deep technical background can access the data they need to make informed decisions.

The self-service nature of Zeenea Explorer is one of its standout features. It allows business leaders and data teams to access data when they need it, streamlining workflows and accelerating decision-making across departments.

4 Key Differentiators Empower Data-Driven Organizations

Four primary features distinguish the Zeenea Data Discovery Platform from other data management solutions, making it a top choice for modern organizations:

  1. API-Based Automation. The solution is a fully API-driven platform, which means that organizations can automate their entire data cataloging process. This level of automation reduces the need for manual updates and ensures that the data catalog stays up-to-date as the data environment evolves. This automation also helps scale metadata management across complex environments with ease.
  2. Universal Connectivity. Zeenea supports a wide range of data sources, from traditional databases to cloud services. This universal connectivity makes it an incredibly versatile tool, capable of managing diverse data types across multiple platforms. Organizations with hybrid data environments can also rely on Zeenea to seamlessly discover and manage all their data assets in one place.
  3. Powerful Knowledge Graph. A comprehensive knowledge graph enables a progressive design for data discovery. As data needs grow, the knowledge graph adapts, helping uncover relationships between data points, enriching the overall understanding of data assets. This dynamic feature provides deeper insights and allows organizations to maximize the value of their data.
  4. Intuitive User Experience. The Zeenea platform is designed with simplicity in mind. It requires no training to use, making it accessible for users of all levels. Whether users are experienced data professionals or business analysts, Zeenea offers an experience that is both intuitive and powerful, allowing for quick adoption and effective use across the organization.

Modern Capabilities to Advance Data Discovery

A data discovery platform like the Zeenea solution is essential for unifying, governing, and leveraging data effectively. In addition to meeting organizations’ needs for data intelligence, the Zeenea Data Discovery Platform offers a variety of innovative capabilities that ensure data is connected, compliant, and easily accessible:

  • Business Glossary. Zeenea’s Business Glossary allows organizations to establish a consistent business language across all data consumers. This feature is crucial for ensuring that everyone in the organization is working from the same definitions and standards, fostering collaboration and transparency. Teams can easily define rules, set policies, and visualize relationships between business terms through an intuitive, automated interface.
  • Data Compliance. In an era of increased regulation, data compliance is critical. Zeenea helps organizations stay compliant with regulations by detecting personal information and providing suggestions on how to tag and manage sensitive data. This capability allows data stewards to handle compliance issues with greater autonomy, ensuring that data usage across the organization adheres to legal requirements.
  • Data Discovery. The Zeenea platform takes inspiration from marketplaces and e-commerce websites, offering smart search capabilities that allow users to find the data they need quickly and efficiently. Whether users know exactly what they are looking for or are exploring potential use cases, the platform’s data discovery capabilities provide smart recommendations and a 360-degree view of relevant data.
  • Data Governance. The platform’s data governance capabilities help drive business initiatives by ensuring that data is trusted, secure, and compliant. Zeenea’s approach to governance is collaborative and non-intrusive, adapting to the specific needs of each organization and ensuring that data governance evolves with the organization’s data landscape.
  • Data Lineage. Zeenea provides comprehensive data lineage capabilities, allowing data teams to map the entire lifecycle of data, from collection to storage and use. This context-rich view helps organizations understand their data’s origins, relationships, and evolution over time, which is critical for regulatory compliance and improved analytics.
  • Data Quality. Through its ability to connect with data quality management (DQM) solutions, the Zeenea platform provides users with data quality metrics during the discovery phase. This ensures that teams can trust the data they are working with, avoiding risks and driving better outcomes.
  • Data Shopping. Similar to an online shopping experience, Zeenea’s Enterprise Data Marketplace allows users to browse, request, and gain access to relevant datasets with ease. This intuitive data shopping experience democratizes data access across the organization, empowering users to leverage data for strategic decision-making without needing to be data experts.
  • Data Stewardship. The Zeenea platform helps data stewards manage large volumes of data by automating data documentation and enhancing metadata management. Data stewardship reduces the burden on data teams, increases productivity, and ensures that organizations can maintain high data standards without the need for extensive manual input. 

Make Data Usable and Accessible to Everyone

At its core, Zeenea is a smart data discovery platform that enables organizations to find, trust, and unlock the value of their enterprise data. By offering both technical and non-technical users the tools they need to access and understand their data, the platform empowers informed decision-making, drives productivity, and fosters collaboration.

With capabilities such as Zeenea Studio and Zeenea Explorer, organizations can maximize the value of their data while maintaining governance and compliance. Whether it’s enriching data, ensuring regulatory compliance, or democratizing access to data across the enterprise, the Zeenea platform provides a scalable, flexible solution that adapts to the evolving needs of today’s data-driven businesses. Experience it for yourself with a live tour.

The post The Game-Changing Data Discovery Platform for Data Democratization appeared first on Actian.


Read More
Author: Dee Radh

Why a Data Intelligence Platform is Business Critical

In today’s digital age, data has become the new currency. It powers decisions, strategies, and operations across industries. However, managing data effectively is far from simple. The complexity of modern data environments is a significant roadblock to driving tangible business outcomes, despite the substantial investments made in data and analytics.

The Disconnect Between Data and Business Outcomes

Many organizations invest heavily in data technologies, expecting this will lead to improved business performance. Yet a common challenge persists: Despite all the data at their disposal, companies are still struggling to convert that data into meaningful, high-value outcomes. This disconnect stems from the overwhelming complexities of data, particularly as organizations attempt to scale their data initiatives.

Scaling brings new hurdles. Companies relying on legacy systems or outdated methodologies often find themselves bogged down by complex data architectures and cumbersome workflows. As the volume of data grows, so do the complexities of managing, governing, and leveraging it effectively. Manual processes and legacy tools simply cannot keep pace with the demand for real-time insights and actionable information.

In addition, many organizations fail to modernize their approach to data and analytics governance, which is crucial for a successful digital transformation and fully optimizing data. Without proper governance, data becomes fragmented, difficult to access, and ultimately less valuable to the business. These issues lead to costly projects that either fail outright or deliver a low return on investment (ROI), causing businesses to miss critical opportunities.

Benefit From the Exponential Growth of Data

One of the most pressing challenges organizations face today is the exponential growth of data. It flows from more sources than ever, including:

  • Internal systems
  • Cloud services
  • Customer interactions
  • IoT devices
  • Social media
  • Other sources

This data influx places immense pressure on traditional tools and methods, which are proving to be insufficient in managing, governing, and securing vast amounts of data. As data grows in volume and variety, so does the need for automation. That’s because manual processes can no longer handle the scale required to keep data accurate, secure, and accessible. Information gaps arise, and organizations miss out on valuable insights that could drive competitive advantage.

This creates a growing need for a data intelligence solution. Additionally, there is increased awareness that without a streamlined, automated approach to data intelligence, organizations won’t be able to effectively manage their expanding data landscape.

The Three Critical Questions of Data Management

At the heart of data management challenges are three critical questions that every organization must address to unlock the full potential of their data:

  1. Where is my data? Data is often scattered across multiple systems, departments, and geographic regions. Without a unified view, it’s nearly impossible to leverage data effectively. Siloed data environments hinder innovation and slow down decision-making processes.
  2. Can I trust my data? Data quality remains a major issue for many organizations. In fact, many companies don’t measure the financial cost of poor data quality, which makes it difficult to determine how inadequate data is impacting the business. When data is inaccurate or incomplete, it undermines decision making and leads to inefficiencies. Trustworthy data is ultimately the foundation of trustworthy business decisions.
  3. Can I easily access the data? At many companies, access to data is often restricted due to compliance rules, security measures, or siloed systems. This lack of access prevents teams from fully leveraging the data needed to innovate and respond to market changes quickly. Data should be readily available to every person and application that needs it.

Key Capabilities of a Modern Data Intelligence Platform

Addressing data challenges requires a comprehensive solution that centralizes, verifies, and governs data efficiently. This is where a data intelligence platform that democratizes data across the organization becomes essential.

A data intelligence platform provides a unified approach to managing, governing, and leveraging data, regardless of where it’s stored. It aligns data practices with business objectives, ensuring that data is accurate, secure, and accessible when needed. It addresses the key aspects of data management:

  • Data Integration. Connecting data from various sources into a single, unified view is mandatory for making informed decisions. A data intelligence platform integrates data seamlessly, providing a holistic view of all data assets across the organization.
  • Data Quality. Maintaining data accuracy, consistency, and reliability is critical to ensuring trustworthy insights. A data intelligence platform automates data quality processes, including cleansing, monitoring, and enrichment, to ensure that data remains useful and accurate.
  • Data Governance. Effective data governance is crucial for managing data security, privacy, and compliance. A data intelligence platform helps establish and enforce data governance policies, ensuring that data is used appropriately and remains protected.
  • Analytics and Insights. A data intelligence platform enables analytics and machine learning capabilities, empowering organizations to extract valuable insights from their data. This allows for predictive and prescriptive decision making, helping businesses stay ahead of the competition.
  • Data Cataloging. Metadata management is a key component of any data intelligence platform. By cataloging data assets, a platform makes it easier for users to discover, understand, and access the data they need, even without deep technical expertise.
  • Self-Service Capabilities. Data professionals, business users, and decision makers need the ability to access and analyze data without relying on IT teams or advanced skill sets. A data intelligence platform empowers users with self-service tools, making it easier to derive insights and act on them quickly.

The Business Impact of Data Intelligence

Implementing a data intelligence platform has a direct impact on operational efficiency and business outcomes. When employees spend less time searching for and cleaning data, they can focus on using that data to drive innovation and deliver value. This operational efficiency translates directly into revenue potential.

Likewise, trust in data also leads to confidence in the decisions derived from it. With trustworthy data, organizations can move faster, capitalize on market opportunities, and make strategic pivots when necessary.

Data governance, a core component of data intelligence, also ensures compliance with privacy and security regulations, protecting sensitive data and minimizing risk. In addition, good data leads to better business outcomes. Accurate forecasts, informed decisions, and faster responses to changing market conditions all stem from having the right data at the right time.

Solving Complexity With the Right Data Intelligence Platform

Managing data in today’s digital landscape is complex, but with the right tools, organizations can overcome the challenges of scale, governance, and data quality. A data intelligence platform, like the Zeenea Data Discovery Platform, provides a comprehensive solution for integrating, managing, and leveraging data across the enterprise.

By addressing the critical questions of data management—where is my data, can I trust it, and can I easily access it—a data intelligence platform unlocks the full potential of an organization’s data. This allows businesses to drive operational efficiency, improve decision making, and deliver better business outcomes.

In a world where data is the new currency, investing in a data intelligence platform is business critical. To find out more, take a Zeenea Live Product Tour.

The post Why a Data Intelligence Platform is Business Critical appeared first on Actian.


Read More
Author: Dee Radh
