Database Engines:pgdbe

Navigational Support (ISAM) Professional

The PostgreSQL ISAM emulation for Xbase++ represents a significant advancement for developers who want to migrate their dbf/ntx/cdx/ads based applications to the PostgreSQL DBMS. This technology enables seamless migration of existing ISAM code and business logic without requiring extensive rewrites.

In addition, it facilitates a full migration of the existing ISAM data model to the PostgreSQL environment. ISAM indexes are emulated through special "order" columns maintained by the client application, ensuring 100% index expression compatibility while maintaining data and index integrity.

For a more detailed explanation of the underlying ISAM emulation concepts, see Tables/Indexes on the SQL Server.

While this universal approach provides 100% compatibility and works very well, it has its drawbacks in some areas:

- Concurrent real-time data access from non-Xbase++ applications is limited. Even keyfix mode does not always help here.

- The additional column required for each index expression increases transaction and upsizing workload (space and time)

- Reindexing or index creation operations on large tables (hundreds of thousands of rows) tend to become a performance bottleneck

- Many filter expressions cannot be converted to SQL syntax and hence cannot be executed

To overcome these limitations, Xbase++ 2.0 build #2169 introduces the following advanced concepts and technologies:

SmartOrders: SmartOrders specialize the universal approach of the ISAM order emulation while providing 100% ISAM backward compatibility. This results in significant performance improvements, workload reduction, and improved SQL real-time interoperability.

RemoteFilters/LocalFilters: To ensure transparent filter management for the ISAM emulation, the concept of filters has been dramatically extended. This results in better backward compatibility for even the most complex filters, and allows transparent usage of SQL expressions for leveraging the full SQL Server potential.

SmartOrder - Index/Orders

SmartOrders introduce a more sophisticated way of implementing ISAM-emulating orders within a data model. They can be categorized into three kinds:

ISAM Orders: These orders maintain 100% backward compatibility with traditional ISAM indexes, utilizing a special order column to represent the synthetic key value of ISAM indexes. This ensures that even user-defined functions in index expressions and complex FOR conditions work correctly.

Virtual Orders: These orders use surrogate keys which are calculated on the fly and include the record number as an additional ordering criterion. This ensures proper ISAM ordering, as well as scan and seek operations that are fully compatible with dbf/ntx/cdx orders. The requirements for a virtual order are that the key expression contains only a single table column and that the FOR expression, if given, is limited to a single logical column or to the Deleted() function. Because virtual orders use a real-time surrogate key, there is no re-indexing operation at the ISAM level as there is with ISAM orders and their order columns. This results in a massive performance boost and a reduction in overhead for larger tables.

Pure Virtual Orders: These orders have no physical representation in the ISAM emulation table meta-model. They allow the order focus to be set dynamically on any column using OrdSetFocus(&lt;fieldname&gt;) and support database operations such as DbSeek(), Skip or Filter based on that order. Since there is no metadata, FOR conditions are not possible. Note that this type of order can have a performance impact on tables with many rows. If an existing order has a tag name identical to the column name, that existing order will be used.
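As a minimal sketch, a pure virtual order might be used as follows; the Customers table and its LASTNAME column are hypothetical examples:

```
// Sketch: order dynamically by a column without any index metadata.
// "Customers" and the LASTNAME column are hypothetical examples.
USE Customers VIA (oSession)
OrdSetFocus( "LASTNAME" )     // pure virtual order on the LASTNAME column
DbGoTop()                     // navigation now follows the LASTNAME ordering
DbSeek( "Miller" )            // seek against the on-the-fly surrogate key
```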

Using SmartOrders

The PostgreSQL database engine as well as the upsizing wizard automatically decide on the best order strategy/kind whenever a new order is created with OrdCreate()/DbCreateIndex(). The order kind is bound only to the tag and not to the bag, so different order kinds can coexist in the same index/order bag.

Remember, SmartOrders is an intelligent feature of the PostgreSQL ISAM emulation related to orders/indexes that works automatically. To control or use SmartOrders, the following settings are available:

Use OrdInfo( PGORD_ISAM_KIND ) to determine the order kind of the active order in the selected workarea. It returns "isam", "virtual" or "pure-virtual".
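For example, the order kind can be inspected right after opening a table; the table and index bag names below are hypothetical:

```
// Sketch: query the SmartOrder kind of the active order.
USE Customers INDEX customers VIA (oSession)
cKind := OrdInfo( PGORD_ISAM_KIND )
? cKind     // one of "isam", "virtual" or "pure-virtual"
```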
The usage of SmartOrders can be disabled at the connection level. Simply use the NO-SMART-ORDERS token in your connection string as shown below. This enforces that all orders use the ISAM order kind with its dedicated column-per-order representation. Disabling SmartOrders is not recommended.

oSession := DacSession():New("DBE=pgdbe;SERVER=localhost;DB=mdidemo;UID=postgres;PWD=postgres;NO-SMART-ORDERS") 

Upsize operations can also disable SmartOrders on a per-connection level by adding the no-smart-order="true" attribute to the connection declaration.

Upgrade existing Databases or Orders

When working with existing ISAM-emulating PostgreSQL databases which have been upsized with Xbase++ build #2158 or earlier, the existing meta information as well as the order/index kinds need to be upgraded.

By default, the upgrade of the meta data in alaska-software.isam.orders is done automatically with the first connection.

A meta data upgrade adds the new kind column to the alaska-software.isam.orders table and sets the order kind of all existing orders to "isam". This only affects the meta table, not your data or order definitions.

However, the order kind of the existing orders is still "isam". This ensures that all changes to your table's data model are applied in a transparent manner. To enforce an automatic re-classification, you need to delete the orders and re-create them as shown in the simplified code below.

// use table and order and collect order info 
USE Customers INDEX customers 
aOrders := {} 
FOR n := 1 TO Len(OrdList()) 
  AAdd( aOrders, { OrdBagName(n), OrdName(n), OrdKey(n), OrdFor(n) } ) 
NEXT 

// destroy all indexes including removal of index columns 
// (e.g. via OrdDestroy() for each collected order) 

// rebuild orders with automatic classification based on the SmartOrder rules 
FOR n := 1 TO Len(aOrders) 
  IF !Empty( aOrders[n][4] ) 
    OrdCondSet( aOrders[n][4] ) 
  ENDIF 
  OrdCreate( aOrders[n][1], aOrders[n][2], aOrders[n][3] ) 
NEXT 

To disable the automatic metadata upgrade of your existing database, use the NO-META-UPGRADE token in your connection string. Of course, this also means your database can never use SmartOrders.

oSession := DacSession():New("DBE=pgdbe;SERVER=localhost;DB=mdidemo;UID=postgres;PWD=postgres;NO-META-UPGRADE") 

RemoteFilter - LocalFilter

RemoteFilter and LocalFilter are new in Xbase++ 2.0 build #2169. They have been carefully designed to provide 100% expression compatibility with the ISAM emulation as well as better transparency for defining filters.

Implicit remote filters and local filters

The new ISAM-emulating filter concept supports implicit remote filters, local filters and explicit remote filters. The basic idea is as follows:

1. First, the PostgreSQL ISAM emulation layer tries to patch the Xbase++ filter expression into a valid SQL WHERE clause.

2. The patched filter expression, which will become the SQL WHERE clause, is validated in the context of the SQL table of the current workarea.

3. If the SQL WHERE clause validation was successful, the filter is applied and used in all subsequent navigational operations. This filter type is called an implicit remote filter.

4. If the SQL WHERE clause failed for syntax or semantic reasons, the original Xbase++ language-level filter is used as a codeblock executed by the application. This is called a local filter.
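The classification can be observed with DbInfo( PGDBO_ISAM_FILTERMODE ) after setting a filter. In this sketch, the country column follows the examples in this document, while MyUdf() is a hypothetical user-defined Xbase++ function:

```
USE customers VIA (oSession)

// Simple expression: can be patched to SQL, becomes an implicit remote filter
DbSetFilter( {|| FIELD->country == "Germany" }, 'country == "Germany"' )
? DbInfo( PGDBO_ISAM_FILTERMODE )   // expected: "remote-filter-implicit"

// Expression with a user-defined function: cannot be expressed in SQL,
// so the codeblock is executed by the application as a local filter
DbSetFilter( {|| MyUdf( FIELD->country ) }, "MyUdf(country)" )
? DbInfo( PGDBO_ISAM_FILTERMODE )   // expected: "local-filter"
```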

Based on the previous concept, the engine can always guarantee backward compatibility, even with the most complex filters. Of course, the local filter approach implies performance costs, specifically for large tables.

Explicit remote filters

The performance drawbacks of a local filter are addressed with the introduction of explicit remote filters. As previously outlined, local filters result from the inability of the PostgreSQL DBMS to execute/understand the original Xbase++ filter expression. However, the PostgreSQL DBMS has its own expression language and specialized operators which are enormously powerful. So why not use SQL in filter expressions for your upsized ISAM code to increase performance dramatically? The following Xbase++ code shows how an explicit remote filter is set up and used via the PGRemoteFilter() class.

USE customers VIA (oSession) 

oFilter := PGRemoteFilter():new("country=::country") 
oFilter:country := "Germany" 
DbSetFilter( oFilter ) 

oFilter:country := "US" 
DbSetFilter( oFilter ) 

1. First, we open the "customers" table via the "oSession" connection

2. Then we create a new filter "oFilter" object using the "PGRemoteFilter" class with the filter expression "country=::country"

3. Note that "::country" is a placeholder for the value of the filter object member :country

4. Finally, "DbSetFilter(oFilter)" applies the "oFilter" to the current workarea context, affecting which records are visible

Of course, we could have used the "old fashioned" way of setting filters via literals, but that is ugly and error-prone. Instead, we strongly recommend using parameterized expressions as outlined in the previous code. The concept is identical to that of parameterized pass-through SQL statements using DacSqlStatement().
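For comparison, a parameterized pass-through SQL statement follows the same placeholder pattern. This is only a sketch and assumes the DacSqlStatement() fromChars()/build()/query() interface of Xbase++ 2.0:

```
// Sketch: same ::placeholder concept, but for pass-through SQL.
oStmt := DacSqlStatement():fromChars( "SELECT * FROM customers WHERE country = ::country" )
oStmt:country := "Germany"     // bind the placeholder, just like with PGRemoteFilter()
oStmt:build():query()          // execute the statement on the server
```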

Note also that when using a PGRemoteFilter(), any valid PostgreSQL syntax of a SQL WHERE clause expression is allowed. When setting a filter object using DbSetFilter(), the PostgreSQL ISAM emulation validates that filter and may raise a runtime error in case your filter is not valid for the PostgreSQL server in the context of the table referenced in the current workarea. The generated runtime error will have a detailed explanation as to why the SQL expression was not valid. Of course, you can catch that runtime error using a BEGIN/END SEQUENCE statement.

To determine the type/kind of a filter after it was set via DbSetFilter(), the function DbInfo( PGDBO_ISAM_FILTERMODE ) can be used. It returns either "remote-filter-implicit", "local-filter" or "remote-filter-explicit".

User defined functions in RemoteFilters

In the following, we outline a use-case for remote filters with user-defined functions (UDFs) and look into the drawbacks. Let's assume we have a "customer" table in the "erp_database" database with the following columns:

- `customer_id` (integer)

- `last_active` (date)

- `total_spending` (numeric)

We will create a function to identify customers who have been active within the last N days and whose total spending exceeds a certain amount.

Step 1: Define the function named "is_active_spender". This function will take the customer's last active date and total spending as parameters, along with the threshold values for days and spending, and will return a boolean indicating whether the customer meets both criteria.

CREATE OR REPLACE FUNCTION is_active_spender( 
  customer_last_active date, 
  customer_spending numeric, 
  days_threshold int, 
  spending_threshold numeric 
) RETURNS boolean AS $$ 
BEGIN 
  RETURN (current_date - customer_last_active <= days_threshold AND customer_spending >= spending_threshold); 
END; 
$$ LANGUAGE plpgsql STABLE; 

Step 2: We can directly use this function in a PGRemoteFilter() to filter customers from the table based on the defined criteria without needing any additional session-specific settings.

oFilter := PGRemoteFilter():new( "is_active_spender(last_active, total_spending, ::minDays, ::minRevenue)" ) 
oFilter:minDays := 30 
oFilter:minRevenue := 500 
DbSetFilter( oFilter ) 

The benefits of this approach are clear: we can define complex criteria rules on the SQL DBMS, and these rules are executed on the server side rather than the client side, which typically leads to much better performance.

This approach also offers some flexibility, as the user-defined function can be changed without application changes, which is of interest in scenarios where the logic needs to be shared between Xbase++ and plain SQL systems.

While there are valid use-cases for embedding certain types of logic in the database (like data integrity constraints or simple data transformations) via a user-defined function, it is generally advisable to keep complex business logic in the application layer. This approach enhances maintainability, testing, and scalability, and complies with modern practices that emphasize loose coupling and high cohesion within software architectures. Just think about the mess in terms of code maintenance and testing if your business logic is located both on the application and the DBMS layer.

A word about filter performance

Typically, remote filters, whether explicit or implicit, are fast. However, when the table has tens of thousands or even millions of rows, when fewer than 50% of all records are "visible" through the filter, or when the table has many columns, it is a good idea to add an SQL index - which is in no way related to an ISAM order/index.

First, determine the columns that will be used in the filter expressions. Suppose we have a customer table like in the example above and we use the country column. The following SQL code will create the SQL index.

CREATE INDEX country_filter ON customer (country); 

There is no need to change anything in the application. Just add the index to the data model using PGAdmin. Because it is an SQL index, it is automatically maintained by the SQL server. No need to reindex or worry about it. If the index is deleted on the server, the application will still work. Of course, it may work slower with filters that use the now un-indexed columns, but that is it.
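To check whether PostgreSQL actually uses the new index for a given filter, the server's EXPLAIN command can be run in PGAdmin or psql; the customer table and country column follow the example above:

```sql
-- Show the query plan for a filter on the indexed column. An "Index Scan"
-- or "Bitmap Index Scan" on country_filter indicates the index is used;
-- note the planner may still prefer a sequential scan on small tables.
EXPLAIN SELECT * FROM customer WHERE country = 'Germany';
```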

Remember, efficient and proper index usage can greatly increase the performance of your filters/queries. But at the same time, indexing does come with some costs such as additional storage and potential slowdowns for INSERT, UPDATE, and DELETE operations. So, use them wisely according to the requirements and nature of your application.

