Thursday, June 5, 2025

How to fix Timezone issue with dates in Oracle Fusion BI Publisher reports

Timezone issues in Oracle Fusion BI Publisher reports typically arise from differences between the database timezone, the BI Publisher server timezone, and the user session timezone.

When you use dates in BI Publisher reports, whether in WHERE conditions compared against effective dates or as report parameters used to filter data, the report sometimes shows data for the previous day instead of the date passed in the parameter (or the actual effective date).

The reason behind this is a conflict between the timezone set at the user profile level and the timezone at the BI server level.

By default, BI Publisher may use the database timezone, which can lead to discrepancies when users from different regions view reports.
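
To see how the same instant can land on different calendar days, here is a quick check you can run in SQL (a minimal sketch; 'America/Chicago' is just an example zone):

-- A timestamp shortly after midnight UTC falls on the previous
-- calendar day once rendered in a US timezone
SELECT
    TO_CHAR(ts_utc, 'DD-MON-YYYY HH24:MI') AS utc_time,
    TO_CHAR(ts_utc AT TIME ZONE 'America/Chicago', 'DD-MON-YYYY HH24:MI') AS central_time
FROM
    (SELECT FROM_TZ(TIMESTAMP '2025-06-05 01:30:00', 'UTC') AS ts_utc FROM dual);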


There are a few ways to fix this problem.


Method 1 - Report Properties:

- Open the report definition

- Navigate to Properties

- Navigate to the Formatting tab

- Scroll down to the FO Processing section

- Find the option named Report Timezone and set it to User

- Save the report

- With this change, all dates in the report are rendered in the user's timezone: if user A's timezone is set to EST, the output follows Eastern time; if user B's timezone is set to CST, the same report renders in Central time.



Method 2 - Data Model:

- If you want the BI server to use one dedicated timezone in all cases, make the change at the Data Model level

- Navigate to the data model of the report

- Edit the query and modify the reference to the date column as shown below. I'm using SYSDATE as the base column and Central time for this example.



SELECT
    -- Interpret the stored SYSDATE as UTC ('+00:00'), shift it to
    -- Central time, then cast back to DATE before formatting
    to_char(CAST((from_tz(CAST(sysdate AS TIMESTAMP),
                          '+00:00') AT TIME ZONE 'America/Chicago') AS DATE),
            'DD-MON-YYYY') AS formatted_date
FROM
    dual





- View the output; the date is now shown correctly in Central time
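
The same expression can be applied wherever dates are compared. As a hypothetical sketch (PER_ALL_ASSIGNMENTS_F is just an example of an effective-dated table, and the '+00:00' offset assumes the database stores dates in UTC):

SELECT asg.assignment_id
FROM   per_all_assignments_f asg
WHERE  CAST((from_tz(CAST(sysdate AS TIMESTAMP), '+00:00')
             AT TIME ZONE 'America/Chicago') AS DATE)
       BETWEEN asg.effective_start_date AND asg.effective_end_date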






Method 3 - Template:


- If the timezone issue affects only the displayed values, we can update the BI Publisher RTF or XSL template to apply the correct timezone when formatting.

- RTF Template: use the <?format-date:...?> function to format dates correctly.

For example:

<?format-date:CREATION_DATE;'MM/dd/yyyy HH:mm:ss z';'America/New_York'?>


- XSL Template: use the XPath 2.0 format-dateTime function. Note that its picture string uses bracketed components, and that timezone shifts are applied with adjust-dateTime-to-timezone, which takes a fixed offset (for example, -PT5H for US Eastern Standard Time) rather than a named zone.

For example:

<xsl:value-of select="format-dateTime(adjust-dateTime-to-timezone(xs:dateTime(CREATION_DATE), xs:dayTimeDuration('-PT5H')), '[M01]/[D01]/[Y0001] [H01]:[m01]:[s01] [z]')"/>



Additional Checks:


In addition to the methods above, it's also recommended to verify that the Fusion application environment itself reflects the correct timezone.

To do so:

- Navigate to Setup and Maintenance.


- Search for Manage Administrator Profile Values.



Ensure the values for the following profile options are set correctly:

- FND_TIMEZONE

- FND_SERVER_TIMEZONE

- FND_TERRITORY


Tuesday, December 3, 2024

How to Implement Regular and Context Sensitive Descriptive Flexfields (DFFs) in Oracle Fusion Cloud

Descriptive Flexfields (DFFs) in Oracle Fusion Cloud are a powerful feature that allows users to extend ERP Cloud by adding custom fields to capture additional information without altering the underlying application framework. They are designed to be flexible and can be configured without programming changes.

Key Features of DFFs in Fusion Cloud:

Customizability: Add fields to capture additional data relevant to your business processes.

Flexibility: Enable or disable fields based on specific criteria.

Non-Technical approach: Configure fields through the application’s user interface without doing much technical work.

Context Sensitivity: Create context-sensitive fields that display based on specific conditions.

Segmentation: Define multiple fields within a single DFF.


Let's see the steps to configure DFF in Oracle ERP Cloud.

- Navigate to Setup and Maintenance



- Search for 'Manage Descriptive Flexfields'


- Let's take an example of Invoices. Search for Invoices.




- Click on Edit

- Under the Global Segments section, click the + sign to configure a new field



- Let's create a new field using ATTRIBUTE1 and name it My Description. I'm using the 30 Characters value set as I want it to be free text.

Let's provide relevant values for Prompt and Display Type.

Enable BI Enabled checkbox if you wish to enable it for use in Business Intelligence reports.




- Save and Close

- Let's add two more fields by following the above process -

My Date:



DS Comments:



- By now, our Global Segments should look like this -



- At this moment, we have configured the regular DFFs using Global Segments for the Invoice form.

Configuring Context Sensitive DFFs in Oracle Fusion Cloud:

A context sensitive DFF lets different segments in different contexts share the same underlying database columns, as the query sketch after the examples below illustrates.

For example, we can define a Location context that uses the following attributes:

ATTRIBUTE1 column for city

ATTRIBUTE2 column for state

ATTRIBUTE3 column for zip

We can also define a Contact context that uses the same columns for other attributes:

ATTRIBUTE1 column for email

ATTRIBUTE2 column for phone

ATTRIBUTE3 column for fax
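
Under the hood, the selected context name is stored in the table's ATTRIBUTE_CATEGORY column, which tells the application how to interpret the shared ATTRIBUTE columns. A hypothetical query against AP_INVOICES_ALL illustrates the idea (the context names are the ones from the example above):

SELECT invoice_id,
       attribute_category,          -- holds the context name, e.g. 'Location' or 'Contact'
       CASE attribute_category
           WHEN 'Location' THEN 'City: '  || attribute1
           WHEN 'Contact'  THEN 'Email: ' || attribute1
       END AS attribute1_meaning
FROM   ap_invoices_all
WHERE  attribute_category IS NOT NULL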


Let's see how to configure Context Sensitive DFFs in Oracle Fusion

- Click on Manage Contexts button


- Click + Icon

- Create a new Context named My Group 1

- Check the Enabled checkbox



- Similarly, create another context named My Group 2



- Now, let's create three Segments under My Group 1 Context as shown below:



- And similarly, create two different Segments under My Group 2 Context as shown below:


- At this moment, we have created two contexts and configured different Segments under each context.


- Now, we need to deploy our DFF.


- Click on Deploy Flexfield button



- Once successfully deployed, click OK to close the popup



- Now, it's time to verify how our DFF segments appear on the Invoices screen

- Navigate to Payables and Create a new Invoice

- Click Show More and navigate to Additional Information tab




- We can see our Global Segments as shown below -



- Now let's look at the My Group drop-down. This is our Context Sensitive DFF.

- Let's select My Group 1, and we will see the segments pertaining to this specific context are shown: 



- Let's select context My Group 2 and we will see the segments pertaining to this context are shown:




Conclusion:

This way we can configure regular (Global) as well as Context Sensitive DFFs in Oracle Fusion. DFFs let us capture additional information in ERP without any technical configuration or coding. With Context Sensitive DFFs, we can reuse the same set of ATTRIBUTE columns across multiple contexts, making it easy to capture the variety of additional information the business requires.



Monday, September 23, 2024

How to Configure Spatial Geometry Objects in Geolocation Maps in Oracle APEX

In an earlier post, we saw how to implement Geolocation Maps in Oracle APEX. In many use cases, you may need to draw features using coordinate and geometry information. Spatial Geometry objects can be used in such cases to make the geoprocessing easier.


Spatial geometry objects are drawn on the background map based on data in a table or returned by a query. APEX Maps support a rich set of built-in marker icons and shapes. Below are the supported spatial geometry objects in the APEX Maps Region:


Points - Points display as markers, e.g. the user's current location or any specific point on the map.

Lines - Lines represent features like roads, or can be drawn between any set of points.

Polygons - Polygons represent areas like states or countries, or can be drawn to highlight any other area on the map.

Heat Map - Heat Maps are used to visualize point density.

Extruded Polygons - These display as three-dimensional, extruded Polygons. The height of the 3D object is based on a numeric column value returned by the query.


Spatial geometry objects can be based on the below sources:

Geometry Column - Supported datatypes include SDO_GEOMETRY, VARCHAR2, or CLOB. VARCHAR2 and CLOB columns must contain geometry information in GeoJSON format (see the conversion sketch after this list).

Numeric Columns - Two columns representing longitude and latitude values; this option applies only to Point and Heat Map objects.
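
If the geometry already lives in an SDO_GEOMETRY column, it can be handed to the map layer directly, or converted to GeoJSON text with SDO_UTIL.TO_GEOJSON in recent database releases. A minimal sketch, assuming a hypothetical LOCATIONS table with a GEOM column:

-- Convert an SDO_GEOMETRY value to GeoJSON text
-- (LOCATIONS and GEOM are hypothetical names)
SELECT SDO_UTIL.TO_GEOJSON(geom) AS geojson_val
FROM   locations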


Let's take a look at each of these Spatial Geometry Objects and how we can use them in the APEX Maps Region.



1. Points:

- Let's create a Layer in our Map

Layer Type: Points

As we learned above, the Points layer can be based on two numeric values, one for Latitude and the other for Longitude. So let's use a simple query with sample values indicating a specific point on the map.


SQL Query:

SELECT TO_NUMBER('40.79261211884605') P_LAT 
      ,TO_NUMBER('-73.95252971760179') P_LONG
FROM DUAL
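
If the point should follow user input rather than fixed literals, the same query can read page items instead (P1_LAT and P1_LONG are hypothetical item names):

-- Hypothetical variant driven by page items
SELECT TO_NUMBER(:P1_LAT) P_LAT
      ,TO_NUMBER(:P1_LONG) P_LONG
FROM DUAL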




- Let's specify fa-location-arrow in Icon CSS Classes under Point Objects section.




- Run the App.


- As we can see, the Point is denoted by an arrow at the exact location given by the Latitude and Longitude.






2. Lines:

- Let's create a new Layer in our Map

Layer Type: Lines

The Lines layer needs GeoJSON as its source, so let's build a query that returns data in GeoJSON format.
This will consist of two points on the map (from and to) with their corresponding Longitude and Latitude values (note that GeoJSON lists longitude first).


SQL Query:

SELECT 
'
{
"type": "LineString",
"coordinates": [
[
-73.95252971760179,
40.79261211884605
],
[
-73.95345327758079,
40.791114880267148
]
]
}
' geojson_val
FROM DUAL
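
Hard-coding the GeoJSON is fine for a demo, but the same document can be assembled from table data using the SQL/JSON generation functions. A hypothetical sketch, assuming a ROUTE_POINTS table with SEQ, LONGITUDE, and LATITUDE columns:

-- Build the LineString dynamically from ordered route points
-- (ROUTE_POINTS and its columns are hypothetical names)
SELECT JSON_OBJECT(
         'type'        VALUE 'LineString',
         'coordinates' VALUE JSON_ARRAYAGG(
                               JSON_ARRAY(longitude, latitude)
                               ORDER BY seq)
       ) GEOJSON_VAL
FROM ROUTE_POINTS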




- Let's specify the line properties:

Stroke Style: Dotted

Stroke Color: Red

Stroke Width: 7





- Run the App.


- As we can see, a dotted red Line is drawn over our map based on our coordinates.

We can also see a new check-box has been added to the map, providing controls for the Line layer.








3. Polygons:

- Let's create a new Layer in our Map

Layer Type: Polygons

The Polygons layer needs GeoJSON as its source, so let's build a query that returns data in GeoJSON format.
This will consist of a number of points on the map (forming a polygonal shape) with their corresponding Longitude and Latitude values. I'm forming the Polygon using 5 points; the first point is repeated at the end to close the ring.


SQL Query:

SELECT 
'
{
"type": "Polygon",
"coordinates": [
[
[
-73.9684379656287,
40.78569943754278
],
[
-73.9629016692047,
40.78849384000406
],
[
-73.95766551865343,
40.789760807246765
],
[
-73.95895386999608,
40.783522052350584
]
,
[
-73.96569143411632,
40.7829049598581
]
,
[
-73.9684379656287,
40.78569943754278
]
]
]
}
' geojson_val
FROM DUAL







- Let's specify the Appearance properties:

Fill Color: Grey

Fill Opacity: 0.5

Stroke Color: Red







- Run the App.


- As we can see, a semi-transparent grey Polygon with a red outline is drawn over our map based on our coordinates.

We can also see a new check-box has been added to the map, providing controls for the Polygon layer.






4. Extruded Polygons:

- Let's create a new Layer in our Map

Layer Type: Extruded Polygons

The Extruded Polygons layer needs GeoJSON as its source, so let's build a query that returns data in GeoJSON format.
This will consist of a number of points on the map (forming a polygonal shape) with their corresponding Longitude and Latitude values. I'm forming the Polygon using 4 points to show a building in 3D.

Here we can see the query also returns another column, E_VAL, with the value 70. This is the Extrusion Value for this layer, which determines the height of the extruded polygon.


SQL Query:

SELECT 
'
{
"type": "Polygon",
"coordinates": [
[
[
-73.95243322590251,
40.79222980941566
],
[
-73.95211869237345,
40.79209869069768
],
[
-73.95250182333042,
40.79162315684402
],
[
-73.95284231859179,
40.791672689498604
]
,
[
-73.95243322590251,
40.79222980941566
]
]
]
}
' geojson_val
,70 E_VAL
FROM DUAL






- Let's specify the Appearance properties:

Fill Color: Lavender

Extrusion Value Column: E_VAL (This is the column returned by our query)

Unit: Meter

Fill Opacity: 0.5







- Run the App.


- As we can see, a semi-transparent lavender Polygon is drawn over a building based on our coordinates.

We can also see a new check-box has been added to the map, providing controls for the Extruded Polygon layer.






This is how we can leverage various Spatial Geometry Objects in the Oracle APEX Maps Region to draw a variety of objects over our Geolocation Maps.






Friday, August 9, 2024

Understanding the Data Flow Process in Oracle Cloud Infrastructure (OCI) Data Integration

In today's data-driven world, businesses need robust solutions to manage, process, and analyze vast amounts of data efficiently. Oracle Cloud Infrastructure (OCI) Data Integration is a powerful service designed to address these needs. With OCI Data Integration (OCI-DI), organizations can seamlessly integrate data across various sources, transform it into meaningful insights, and drive informed decision-making.

OCI-DI offers a streamlined approach to extracting, transforming, and loading data across various sources. This involves several steps, each designed to ensure data is accurately extracted, transformed, and loaded into the desired destination.

Let's take a look into the data flow process in OCI-DI.


1. Data Extraction

The data flow process begins with the extraction of data from various sources. This step involves identifying and accessing the relevant data sources, which could include:

- Databases: Oracle databases, third-party databases (such as MySQL, PostgreSQL, SQL Server, Snowflake), and other on-premises or cloud databases.

- SaaS Applications: Data from various Software as a Service (SaaS) applications, such as Oracle Fusion Cloud (BICC and BI Server), PeopleSoft, Salesforce, and others.

- File Storage: Structured or unstructured data stored in files within OCI Object Storage, HDFS, Hive as well as Amazon S3.

OCI-DI offers pre-built connectors that simplify the extraction process, allowing users to seamlessly connect to different data sources and retrieve the necessary data.


2. Data Transformation

Once data is extracted, it undergoes transformation to ensure it is in the correct format and structure for analysis. This stage is crucial for cleansing, enriching, and converting raw data into meaningful insights. Below are the key transformation activities in OCI-DI:

- Data Cleansing: Removing duplicates, applying filters, correcting errors, and ensuring consistency in the data.

- Data Enrichment: Adding additional information or context to the data to make it more useful for analysis.

- Data Aggregation: Summarizing data to generate metrics or insights, such as calculating totals, averages, or other aggregate values.

- Data Mapping: Converting data from one format to another, aligning it with the schema and requirements of the target system.

OCI-DI provides a rich set of transformation tools and SQL-based expressions to perform complex transformations efficiently.


3. Data Loading

The final step in the data flow process is loading the transformed data into the target destination. This could be a data warehouse, data lake, or another storage solution where data can be analyzed and used for decision-making. Below are some commonly used target destinations in OCI-DI:

- OCI Data Warehouse: For storing and analyzing structured data using Oracle Autonomous Data Warehouse (ADW).

- OCI Data Lake: For storing large volumes of structured, semi-structured, and unstructured data using OCI Data Lake.

- Any other on-prem Oracle Database.

- External Systems: Other cloud storage solutions, third-party databases, or on-premises systems.

OCI-DI ensures that data is loaded efficiently and accurately, maintaining data integrity throughout the process.


4. Scheduling and Orchestration

To ensure that data flows are executed regularly and efficiently, OCI-DI provides tools for scheduling and orchestration. This involves:

- Task Scheduling: Setting up schedules for Data Flow Integration Tasks to run at specified times or intervals, ensuring data is updated and synchronized regularly.

- Orchestration using Pipelines: Building Pipelines to coordinate multiple data flows and processes to run in a specific sequence, ensuring dependencies are managed, and workflows are executed smoothly.


5. Monitoring and Management

Effective monitoring and management are crucial for maintaining the reliability and performance of data flows. OCI-DI offers comprehensive monitoring and management capabilities, including:

- Real-Time Monitoring: Tracking the status and performance of data flows in real time, allowing users to identify and address issues promptly.

- Alerts and Notifications: Setting up alerts to notify users of any failures, delays, or anomalies in the data flows, ensuring quick resolution of issues.

- Logging and Auditing: Maintaining detailed logs and audit trails of data flow activities for troubleshooting, compliance, and governance purposes.


Main Components of OCI-DI service:

1. Data Assets:

Data Assets represent the data sources that we want to use in Data Integration data flows, tasks, and pipelines. They help us connect to various systems such as BICC, Fusion BI Server, Object Storage, Oracle Cloud databases such as ADW, as well as external on-prem Oracle databases.

Data Assets for RDBMS:



Data Assets for File and Big Data:




Data Assets for Application:




Data Assets for Others:





2. Data Flows:

A data flow is a visual program representing the flow of data from source data assets, such as a database or flat file, to target data assets, such as a data lake or data warehouse.

Sample Data Flow:




3. Tasks:

A task is a design-time resource that specifies a set of actions to perform on data. Tasks can be of the below types:

- Data Loader: Lets us load diverse data into data lakes and data warehouses.

- Integration: Lets us take a data integration flow design and prepare it for running.

- OCI Data Flow: Lets us run an existing application in OCI Data Flow from within OCI-DI.

- Pipeline: Lets us run a Pipeline.

- SQL: Lets us run custom SQL from within a pipeline (see the sketch after this list).

- REST: Lets us run a REST API endpoint in a pipeline.
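
For instance, the custom SQL in a SQL task could be a simple upsert into a reporting table. A hypothetical sketch (all table and column names are made up):

-- Upsert staged rows into a target reporting table
MERGE INTO sales_target t
USING sales_stage s
ON (t.order_id = s.order_id)
WHEN MATCHED THEN
    UPDATE SET t.amount = s.amount, t.updated_on = SYSDATE
WHEN NOT MATCHED THEN
    INSERT (order_id, amount, updated_on)
    VALUES (s.order_id, s.amount, SYSDATE)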




Sample Integration Task:



Task Scheduling:

Once a task is published to an Application in OCI-DI, we can run the task manually from the application or create a task schedule to automate the runs.

Sample Task Schedule:





4. Pipelines:

A pipeline is a design-time resource in Data Integration that connects tasks and activities, in one or more sequences or in parallel, from start to finish to orchestrate data processing. We can invoke Data Flow tasks, Data Loader tasks, or even SQL and REST tasks in a Pipeline to meet the orchestration and data integration requirements.

Sample Pipeline:





Conclusion

The Extract-Transform-Load process in Oracle Cloud Infrastructure (OCI) Data Integration is designed to be efficient, scalable, and robust. OCI-DI seamlessly integrates data from various sources, transforms it into meaningful insights, and loads it into the appropriate destination, empowering organizations to make data-driven decisions. Whether you face large volumes of data or complex integration requirements, OCI-DI offers the tools and capabilities needed to streamline data workflows and unlock the full potential of your data.

