
The big difference between Oracle BI Applications 7.9.X and 11g is the integration, or ETL, tier. 11g no longer uses DAC or Informatica; Oracle Data Integrator (ODI) replaces those components. The cross-reference below is for those thinking of moving, or already moving, to 11g from 7.9.X. It maps the Informatica/DAC concepts to their ODI counterpart(s). After the two cross-reference tables, there are three images showing where these ODI components are located.

 

DAC to ODI

DAC | ODI | Notes
Execution Plan | Load Plan | ODI has a Load Plan Generator, which acts like the Execution Plan generator in DAC.
Container | Adaptor folders / topology mappings | There is no concept of containers for different sources. Instead, ODI uses adaptor folders along with logical and physical topology mappings using contexts. BIA uses one context, ‘global’. The only reason to create others is when you have more than one source on the same product line version (PLV), for example two instances of EBS 12.1.3 that need to be sourced; there is a whole different setup for that scenario.
Subject Area | Fact/Dimension Group | ODI uses the same type of concept, except it uses Load Plan components along with Fact Groups and Dimension Groups.
Tables | Datastores | Datastores are categorized under models and sub-models. They represent the tables/files that can be used as both sources and targets.
Indices | Constraints | Indices are set up in ODI Studio under the Models accordion of the Designer tab. When you traverse model -> sub-model -> datastore, they are in the Constraints folder under the respective datastore (aka table).
Task Groups | Load Plan | Not to be too confusing, but the main Load Plan executed for BIA is generated from other load plans. Think of a Load Plan object in ODI as a definition that contains a hierarchy of steps that can be executed conditionally, in parallel, or in series. You can build bigger load plans using other load plans as building blocks, like Legos. Load plans generated by BIACM or the LPG plugin use the smaller load plan definitions to build the main one.
Task | Task folder | Tasks in DAC relate to the workflows (Incremental/Full). In ODI, this is represented with task folders, which hold the package(s), interface(s), and procedure(s).

 

 

Informatica to ODI

Informatica | ODI | Notes
Source/Targets | Datastore | Within ODI, there is no need to specify a table or file more than once as in Informatica, where one had to define a table/file as a source and again as a target in different adaptor folders, an easy way for these definitions to get out of sync. In ODI this is done only once, by defining the datastore. Datastores can be used as both sources and targets and in multiple adaptor folders.
Mapping/Mapplet | Interface | These two object types are both handled in ODI as interfaces. An interface in ODI can be either a temporary interface or a main interface, distinguished visually by a yellow (temporary) or blue (main) icon: yellow/temporary interfaces replace mapplets, and blue/main interfaces replace mappings. The Business Component (BC) mapplets are replaced with temporary interfaces with an SQ (Source Qualifier) suffix.
Transformations | Interfaces | This one is tricky. Transformations in Informatica can be active or passive: active transformations can change the number of rows that pass through them, passive ones cannot. Transformations take N input ports and produce M output ports. In ODI 11g there is nothing quite like that, except perhaps temporary interfaces. You could also create database functions and use them in mapping expressions. Just remember: if you can write what you want in a SQL statement, you can do the same with a combination of both types of interfaces.
Parameters/Variables | Variables | In ODI, variables can have one of two scopes: project or global. Variables can have default values and also a refresh SQL statement, which is used to set the variable at run time when used in packages. (A hypothetical refresh SQL example follows the table.)
Task/Session | n/a | There is no equivalent in ODI, nor a need for one.
Workflow | Package | Packages are made of a sequence of steps organized in an execution diagram. You can use interfaces, procedures, and variables as steps. For BIA, packages start by refreshing and setting variables, then run the main (blue) interface along with diagnostic and error-handling steps. The majority of the time only one interface is called here, though there may be multiple interfaces to handle dependencies and run order.
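As a quick, hypothetical illustration of the Variables row above (the variable purpose, the ETL_RUN_HISTORY table, and its columns are placeholders I made up, not objects from this post), a refresh SQL for an ODI variable might look like:

-- Hypothetical refresh SQL for an ODI variable such as a LAST_EXTRACT_DATE variable.
-- The variable keeps its default value until a package refreshes it, at which point
-- the single value returned by this query becomes the variable's run-time value.
SELECT MAX(last_update_date)
FROM   etl_run_history          -- placeholder table, not part of the BIA schema described here
WHERE  status = 'SUCCESS';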

 

 

Image 1: Adaptors, Task Folders, Packages and Interfaces

 

[Image: Picture1]

 

 

Image 2: Models, Sub-models, Datastores, Indexes

 

[Image: Picture2]

 

 

Image 3: Load Plan (Components) used to generate the main load plan

[Image: Picture3]

 

A look at OBIA’s shift to using SCD1_WID in place of ROW_WID for Slowly Changing Dimensions

First, a little primer on Slowly Changing Dimensions (SCDs). Slowly Changing Dimensions are a star schema design pattern that describes how changes to source data fields are handled in dimension tables. There are basically five types that I know of, two of which are the most prevalent and are used in OBIA: Type 1 and Type 2. (The other three are Time-stamp, Type 3, and Hybrid.)

  • Type 1: This type is used where no historical change tracking is necessary. Whenever a column value changes, the dimension record is updated; the column now reflects the current value and there is no trace of the old value. This always gives an AS-IS view of the dimensional attributes, and dimension history is not maintained.

 

  • Type 2: This type is used when historical change tracking is important. When the source data changes for a historically significant attribute, a new record is created with these changes and the old value is preserved. In this type of SCD, columns can be either historically significant (SCD2 columns) or not (SCD1 columns). SCD2 columns trigger new records; SCD1 columns do not and simply update the current record, or all history if set up to update all history. ‘Update all history’ means changes to these SCD1 columns trigger an update of that column on all records for that particular unique dimension member.

 

Using Type 2 SCDs, one can view dimensional records in two different ways: point-in-time or AS-IS. Point-in-time uses the dimensional record as of the event’s date. So given a factual event on, say, June 1st 2010, we would see the snapshot, or version, of that dimension’s values on June 1st 2010. With AS-IS, we always see the most current values of the dimension. So for that same example, we would see the most current values of the dimension instead of the historical snapshot.

 

OBIA versions 7.9.X and older always used point-in-time. For someone who wished for the AS-IS view, that meant significant customizations and changes in the ETL and RPD. Oracle recognized this deficiency and changed its methodology starting in 11g. In 11g, point-in-time is still the out-of-the-box behavior, but the switch to AS-IS is now just a simple RPD change. Welcome the SCD1_WID column!

 

Now, for each dimension table (_D) that can be set up for SCD2, there is a corresponding Type 1 table (_T1_D). For example, W_EMPLOYEE_D has a corresponding W_EMPLOYEE_T1_D table, as this dimension can be set up for SCD2. The Type 1 table holds only one record for each unique dimension member, uniquely identified by DATASOURCE_NUM_ID and INTEGRATION_ID. It is here where the SCD1_WID value is born: the Type 1 table has an SCD1_WID column, which is the surrogate key for the Type 1 table and is populated by a sequence. So for our example, while W_EMPLOYEE_D can have N records for, say, employee ‘John Smith’, there will be only one record for ‘John Smith’ in the T1 table, uniquely identified by its SCD1_WID. These T1 tables are very thin and typically have only the following six columns (an illustrative query follows the list):

 

  1. DATASOURCE_NUM_ID
  2. INTEGRATION_ID
  3. SCD1_WID
  4. W_INSERT_DT
  5. W_UPDATE_DT
  6. ETL_PROC_WID
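To make the relationship concrete, here is a small illustrative query (a sketch using the column names listed above, not something shipped with OBIA). Each employee should show N SCD2 history rows in W_EMPLOYEE_D but only one distinct SCD1_WID, which comes from W_EMPLOYEE_T1_D:

-- Illustration only: count the SCD2 history rows per natural key and confirm that
-- all of them share a single SCD1_WID (the surrogate key sourced from W_EMPLOYEE_T1_D).
SELECT d.datasource_num_id,
       d.integration_id,
       COUNT(d.row_wid)           AS scd2_versions,      -- N history records per employee
       COUNT(DISTINCT d.scd1_wid) AS distinct_scd1_wids  -- expected to be 1
FROM   w_employee_d d
GROUP  BY d.datasource_num_id, d.integration_id;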

 

The T1 SCD1_WID values are used to populate the related dimension’s SCD1_WID column and also the facts conformed to that dimension. So the T1 tables are populated first; then the dimensions perform a lookup against their T1 table to get the SCD1_WID to populate their own SCD1_WID column. The facts then look up the dimensions and use the dimension’s SCD1_WID for their _WID foreign key fields if the dimension is an SCD; otherwise they use the ROW_WID value.
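A rough sketch of that dimension-to-T1 lookup, not the actual SIL/KM code, might look like this:

-- Sketch only (not the generated KM SQL): resolve each dimension row's SCD1_WID
-- by joining to its type 1 table on the natural key.
SELECT d.row_wid,
       d.integration_id,
       t1.scd1_wid
FROM   w_employee_d    d
JOIN   w_employee_t1_d t1
  ON   t1.datasource_num_id = d.datasource_num_id
 AND   t1.integration_id    = d.integration_id;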

 

In OBIA 7.9.x and older, the dimension’s ROW_WID was always used, as there were no SCD1_WID columns. So facts always joined to dimensions point-in-time, with no flexibility. Now they use the SCD1_WID and offer the flexibility to switch between the two views.

 

All of this is managed in the ETL layer within the ‘IKM BIAPPS Slowly Changing Dimension’. This KM manages the T1 table as well as the SCD logic for the dimension. It inserts records into the T1 table within the ‘Insert New Type 1 values’ step, as shown below.

[Image: SCD IKM]
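For intuition only, this is not the SQL the KM actually generates, the step conceptually does something like the following (the staging table w_employee_ds and the sequence w_employee_t1_d_seq are hypothetical names):

-- Conceptual sketch of the 'Insert New Type 1 values' step: give any new natural key
-- a fresh SCD1_WID from a sequence and insert it into the T1 table.
INSERT INTO w_employee_t1_d
  (datasource_num_id, integration_id, scd1_wid, w_insert_dt, w_update_dt, etl_proc_wid)
SELECT s.datasource_num_id,
       s.integration_id,
       w_employee_t1_d_seq.NEXTVAL,  -- surrogate key for the new type 1 row
       SYSDATE,
       SYSDATE,
       :etl_proc_wid                 -- current ETL run identifier (bind variable)
FROM   w_employee_ds s
WHERE  NOT EXISTS (SELECT 1
                   FROM   w_employee_t1_d t
                   WHERE  t.datasource_num_id = s.datasource_num_id
                   AND    t.integration_id    = s.integration_id);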

The dimension’s SCD1_WID column is also populated by a lookup to the T1 table later on, in another step within the same KM. In essence, this SCD IKM handles everything in terms of the SCD functionality and the T1 table.

 

Within the facts’ SIL interfaces, this SCD1_WID column from the dimensions is used to populate the _WID fields. ROW_WID is used for those dimensions that are not SCDs, aka Type 1 dimensions.

 

The RPD physical join expressions used to join the facts to the dimensions via ROW_WID; now they utilize the SCD1_WID, as seen below.

 

7.9.X

[Image: 79_physical_join]

 

11g

[Image: SCD1_physical_join]

 

The 11g out-of-the-box RPD is set up to use the point-in-time view, just as prior releases did. However, if one wants the AS-IS view instead, it is just a simple RPD change: only the physical join expression needs to change. I will use the W_WRKFC_EVT_MONTH_F -> W_EMPLOYEE_D join as the example:

 

Before:

W_WRKFC_EVT_MONTH_F.employee_wid = W_EMPLOYEE_D.scd1_wid and W_WRKFC_EVT_MONTH_F.effective_from_dt >= W_EMPLOYEE_D.effective_from_dt and W_WRKFC_EVT_MONTH_F.effective_from_dt < W_EMPLOYEE_D.effective_to_dt

 

After:

W_WRKFC_EVT_MONTH_F.employee_wid = W_EMPLOYEE_D.scd1_wid and W_EMPLOYEE_D.current_flg = 'Y'

 

Just changing the part of the expression after the SCD1_WID equality (the effective-date range condition versus the current_flg = 'Y' condition) dictates the dimensional view you wish to achieve: point-in-time vs. AS-IS.

So it seems that, by default, BI Publisher uses CSS style sheets rather than inline CSS when generating reports in HTML format. This is perfectly fine, as the reports render fabulously in our favorite browsers. However, in certain popular email clients like Gmail, Hotmail, MSN, and so on, not so well: the HTML reports embedded in the email look distorted. My personal experience with this is in Gmail, looking at tables. The tables are aligned, but the table borders and color formatting are gone, so the emailed report does not look polished or 100% presentable. It seems these email clients don’t like CSS stylesheets and prefer CSS embedded in the HTML body.

There is a solution for this. However, it is not something easily set up in the front end via a property or setting, and it is a system-wide setting in BIP for all HTML output generation. By embedding the CSS inline, the HTML message will be larger in size; that is the downside to embedding CSS inline.

As per an Oracle Master Note (Doc ID 1270136.1), what needs to be done is a modification to the xdo.cfg file located in the folder:

{MIDDLEWARE_HOME}/user_projects/domains/bifoundation_domain/config/bipublisher/repository/Admin/Configuration
Here you will need to add the html-css-embedding property line shown below within the properties element:

<?xml version="1.0"?>
<config xmlns="http://xmlns.oracle.com/oxp/config/" version="1.0.0">
   <properties>
      <property name="html-css-embedding">embed-to-element</property>
   </properties>
   <fonts>
   </fonts>
</config>

Save the document and restart BI Services.

–(Updated/Revised 14-Jul-15)–

Overview: Working with a bursting BIP report to be sent out to X number of recipients. The bursting job never completes and stays in ‘Job processor completed’ status indefinitely. Looking at the job details, N recipients’ reports are created and delivered; M recipients, however, are stuck in the ‘Rendering Report Document’ status. The BIP job never completes.

Analysis: Looking into the issue, it is not clear what is causing it. Is it a memory issue? Is it stuck threads? Is there something wrong with the bursting definition query?

The first course of action is to look at the log files, specifically bipublisher.log, and bump up the debug tracing level within the BIP log configuration. To enable this:

  1. Log in to Enterprise Manager
  2. In the left pane, expand the folder ‘Application Deployments’
  3. Two options based on the install type:
    1. Simple install – right-click on bipublisher(AdminServer)
    2. Enterprise install – right-click on bipublisher(bi_server1, bi_server2,…)
  4. From the pop-up menu, choose Logs -> Log Configuration
  5. In the right pane, the log configuration window opens, showing the ‘Log Levels’ tab
  6. At the bottom of the page, tick the checkbox ‘Persist log level state across component restarts’
    1. This ensures that the changed log level will persist across OBI restarts
  7. Change the Oracle Diagnostic Logging Level to TRACE:32 (FINEST)
  8. Click Apply

The log files did not show anything, before or after enabling extended debugging, so they were not helpful.

I looked at both memory and threads and did not see anything that suggested either was the issue. I broke the data set and bursting SQL query down to one of the users/recipients having the issue, and that sample user always had the issue! Breaking it down to a smaller sample set, and then down to the one problematic recipient, helped me isolate the problem and conclude it was not memory or thread related.

I then evaluated the bursting SQL query definition, thinking that must be it. But no, nothing was wrong, as many other recipients were getting their reports created and sent. So it was not the bursting SQL query definition.

I then ran the report by itself for the test user having issues. That ran and produced a document! So what gives!? It can run by itself, but bursting has issues.

I did notice, in the standalone run of the report, that two graphs were not being produced and just showed the running clock. Hmmm…

I looked at the data set for that one recipient. All looked good. Example:

<DATA_DS>

<G_1>

<NAME>FOO</NAME>

<ADDR>123 ELM ST…</ADDR>

</G_1>

</DATA_DS>

I looked at a comparable dataset for another user/recipient that ran fine. Here it is! Example:

<DATA_DS>

<G_1>

<NAME>FOO</NAME>

<AMOUNT>100.00</AMOUNT>

<ADDR>123 ELM ST…</ADDR>

</G_1>

</DATA_DS>

The problematic users were missing a few elements in their data set rows, like the AMOUNT element in the examples above. This was causing the BIP report engine to freak out and churn forever. Specifically, it was causing the gauge objects within the BI Publisher Layout (XPT) format to churn and cause this issue. Churn meaning showing the running clock symbol forever when running the report separately, or the ‘Rendering Report Document’ status in bursting mode. So, if any data elements referenced by gauges in the XPT report format are missing or null in the data set, it is a problem!

The XPT (BI Publisher Layout) format does not allow conditional sections, unlike the RTF format. The RTF format is more flexible in that regard, but I had to use the XPT format for this particular circumstance.

Now, as I stated in the first version of this post, I thought it was due to the null values not being added as tags and thus being missing. But that was not the case. I found out that there is a setting under the data model -> Properties that will make the data set results include XML tags/elements for any null values. This setting is a checkbox labeled ‘Include Empty Tags for Null Elements’. Checking this box will produce an XML like the one below; take note of AMOUNT, as that is the element that is null.

<DATA_DS>

<G_1>

<NAME>FOO</NAME>

<AMOUNT/>

<ADDR>123 ELM ST…</ADDR>

</G_1>

</DATA_DS>

 

I believe that the BIP XPT format should allow null elements, or at least allow setting default values for the gauge objects, so that this error/issue would no longer exist. At the very least, it should throw an error or warning giving a clue to the issue.

Resolution/Summary:

Ensure that all data elements (aka columns) from the data set query return a value and not NULL, especially for metrics that will be used in gauge objects in the XPT format. My particular workaround was to put an NVL() inside the aggregate function to ensure it does not return NULL and instead defaults to 0. For my particular issue, the data set SQL had an aggregate function that was returning NULL. Example:

SELECT
  MAX(AMT) AS AMOUNT
FROM …

I just needed to modify it accordingly and voila: the bursting jobs now complete and deliver to all recipients without issues or getting stuck.

SELECT
  MAX(NVL(AMT, 0)) AS AMOUNT
FROM …

While I was looking into a calculation issue today for a calculated percentage, let’s say the formula is ((b-a)/c)*100, the numbers just were not coming out correctly. Actually, they were all coming out to 0!!! I knew this was not right, so I investigated. I looked at the physical query and noticed OBI was generating that formula as FLOOR((b-a)/nullif(c,0))*100. Well, that explains it! But why was the BI Server adding this FLOOR function? It did not make sense to me. So how do I make, or trick, the BI Server not to do this? I asked my friend Google. He had no answers. I then went to my pal support.oracle.com. They did, within Doc ID 1460809.1. As per Oracle, the cause was:

There was a change in the calculation engine in OBIEE 11g pertaining to how integer division is performed.  In OBIEE 10g, you could do integer division but it might not return an integer, you might get a decimal.  In OBIEE 11g this is no longer true, it always returns an integer.   As a result in DBFeatures in the repository, FLOOR_SUPPORTED is checked by default.

So there are 2 solutions:

  • Turn off the FLOOR_SUPPORTED database feature. This would be a global change and, I think, too drastic.
  • Within the Answers report, wrap a CAST( … AS DOUBLE) around the numerator, like so: (CAST(b-a AS DOUBLE)/c). The result is that the FLOOR is replaced with the cast: (cast(b-a as DOUBLE PRECISION)/nullif(c,0))*100. A small demonstration follows this list.
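To see why the results all came out to 0, here is a toy Oracle query with made-up numbers (b - a = 25 and c = 200) contrasting the FLOOR-wrapped expression with the CAST workaround:

-- Toy demonstration on DUAL: FLOOR truncates the sub-1 ratio to 0 before the * 100,
-- while casting the numerator to DOUBLE PRECISION keeps the decimal result.
SELECT FLOOR((125 - 100) / NULLIF(200, 0)) * 100                    AS floored_pct, -- 0
       (CAST(125 - 100 AS DOUBLE PRECISION) / NULLIF(200, 0)) * 100 AS cast_pct     -- 12.5
FROM   dual;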

Cross drilling! Or, in star schema design, what is otherwise known as “drilling across”. This is where you want to report on metrics from two or more fact tables in the same report. This is easy when your report uses only conformed dimension attributes, but in most cases the requirement is for these fact tables’ metrics plus some non-conformed dimension attributes.

In OBIEE, we typically solve this by setting the logical level of each fact table’s metrics to the total/all level of the non-conformed dimensions (level-based measures, or LBMs). This then allows users to build reports across different fact tables with conformed and non-conformed dimension attributes. What happens behind the scenes is the following:

  1. A query is issued for each fact table, aggregating it to the common level it shares with the other fact tables.
  2. The results, or data sets, of each fact table query from #1 above are merged/joined on the conformed dimension attributes using a full outer join (see the sketch below).
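Here is a hypothetical sketch of that merge; the fact and dimension names are made up and not from any OBIA schema. The FULL OUTER JOIN at the bottom is the piece that, from 11.1.1.7.1 on, gets generated as a LEFT OUTER JOIN instead:

-- Step 1: each fact is aggregated to the grain it shares with the other fact (here, calendar year).
WITH revenue_by_year AS (
  SELECT d.cal_year, SUM(f.revenue_amt) AS revenue_amt
  FROM   fact_revenue f
  JOIN   dim_date d ON d.date_wid = f.date_wid
  GROUP  BY d.cal_year
),
cost_by_year AS (
  SELECT d.cal_year, SUM(f.cost_amt) AS cost_amt
  FROM   fact_cost f
  JOIN   dim_date d ON d.date_wid = f.date_wid
  GROUP  BY d.cal_year
)
-- Step 2: the two result sets are stitched together on the conformed attribute.
SELECT COALESCE(r.cal_year, c.cal_year) AS cal_year,
       r.revenue_amt,
       c.cost_amt
FROM   revenue_by_year r
FULL OUTER JOIN cost_by_year c
  ON   r.cal_year = c.cal_year;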

This is the correct behavior and solution for drilling across, and it is what OBIEE has been doing forever. Now, as of 11.1.1.7.1, if you are on that release, the full outer joins are not there! Instead, it uses left outer joins. This will alter all the reports you have written that require drilling across and give different results. So what happened!?

Researching this for a client who upgraded from 11.1.1.5 to 11.1.1.7.x, I came across this issue and investigated. It threw me for a loop. Upon investigating, I found support Doc ID 1601703.1. As per that doc: “There was a change in behavior in 11.1.1.7.1. The issue matches this change. Basically there was an inconsistency in how filters were working with grand totals and subtotals. As a result, full outer joins were replaced with left outer joins.”

I guess it can work with left outer joins, but more than likely this will not give you the result you want. First, before I get to the solution to re-introduce full outer joins, you may want to know which table ends up on the left side of the left outer join. From my experimentation, the fact table with the most conformed dimensions seems to be the one on the left. So if fact table A has the most conformity, the SQL would be A LEFT OUTER JOIN B ON (…).

The question then is: how do I get back to the FULL OUTER JOIN? The answer lies in the session variable FOJ_GRAND_TOTAL_LBM (FOJ standing for FULL OUTER JOIN). As the Oracle doc states, there are two ways to resolve this: the REPORT ONLY approach or the INIT BLOCK approach. The REPORT ONLY solution only solves this for an individual report; you would have to make changes to each report in which you want the full outer join to be used. The INIT BLOCK solution makes the change global.

REPORT ONLY

In the advanced tab of the report, enter the following within the PREFIX box:

SET VARIABLE FOJ_GRAND_TOTAL_LBM=1;

INIT BLOCK

  1. Create a new session variable in the RPD: FOJ_GRAND_TOTAL_LBM
  2. Set the default value of this session variable: 1
  3. Under Initialization Block, click the button to create the dummy init block
  4. Set the init block’s name: SET_FOJ_GRAND_TOTAL_LBM
  5. Within the init block, check the Disabled check box
  6. Click OK within the init block
  7. Click OK within the session variable

(Note: the default initializer is all that is needed to set the value, but all session variables need an init block assigned, thus the need to create the disabled init block.)

Row-level security in OBI is a very powerful tool. It is used to limit the data seen by users/groups/roles based on a business’s security model. It really is not hard, but you need to understand when and how data filters will be applied. This is the key that is not documented thoroughly (or at least I have not seen it clarified by searching the web). So I did the work to clarify it for you, or at least I hope I do in this write-up. Note: I said it is easy because, once you understand how OBI works in this regard, it pretty much is. The biggest issue, from my experience, is getting the business to define its security model and how it wants all the scenarios to play out.

 

What I will show is a set of scenarios and what happens behind the scenes. Based on these scenarios and outcomes, one should understand what the BI Server is doing and put that knowledge to use when setting up security for a given security model’s requirements.

 

 

Scenario 1: A user is assigned two app roles, ‘A’ and ‘B’; the app roles are totally unrelated (meaning no hierarchy or membership of each other). One of the app roles (‘A’) has security filters; the other (‘B’) does not.

Here I am adding a data filter on the presentation column “Company Name” = ‘Foo’ for app role ‘A’.

[Image: security1]

I built a simple report including that presentation column to test.

Outcome: The security filter for app role ‘A’ is added to the query. Here is the query produced for the report; as you can see, the filter was applied!

SELECT T528077.PROFIT_CENTER_NAME AS c2,
T556965.MCAL_PERIOD_NAME         AS c3,
T556965.MCAL_PERIOD_WID         AS c4,
T528077.ROW_WID                 AS c5,
SUM(T551274.BALANCE_GLOBAL1_AMT) AS c6,
T556965.ROW_WID                 AS c7
FROM W_MCAL_DAY_D T556965,
W_LEDGER_D T551773,
W_PROFIT_CENTER_D T528077,
W_INT_ORG_D T538674,
W_GL_BALANCE_A T551274
WHERE ( T528077.ROW_WID   = T551274.PROFIT_CENTER_WID
AND T538674.ROW_WID       = T551274.COMPANY_ORG_WID
AND T538674.ORG_NAME       = 'Foo'
AND T538674.COMPANY_FLG   = 'Y'
AND T551274.BALANCE_DT_WID = T556965.ROW_WID
AND T551274.LEDGER_WID    = T551773.ROW_WID )
GROUP BY T528077.ROW_WID,
T528077.PROFIT_CENTER_NAME,
T556965.ROW_WID,
T556965.MCAL_PERIOD_WID,
T556965.MCAL_PERIOD_NAME
Summary: For mutually exclusive app roles, meaning no hierarchical relationship, when a security filter is placed on one role and not the other, the filter placed on the one role is applied implicitly to the physical query.

 

 

Scenario 2: A user is assigned two app roles, ‘A’ and ‘B’; the app roles are totally unrelated (meaning no hierarchy or membership of each other). Each app role has a security filter; the data filters may be on the same logical object or on different ones.

Same Object Data Filter:

App role ‘A’ has the same filter as scenario 1. I just added a filter for app role ‘B’ on “Company Name” = ‘BAR’.

[Image: security2]

Different Object Data Filter:

App role ‘A’ has the same filter as scenario 1. I just added a filter for app role ‘B’ on “Account Name” = ‘ACME’. Please note: this is on the same presentation object, but the data filter expression is on a different logical column.

 

[Image: security3]

Outcome: If the data filters are on the same logical object, OBI combines them in an IN clause. If the data filters are on different logical objects, it combines them with ORs.

Same Object Data Filter:
SELECT T528077.PROFIT_CENTER_NAME AS c2,
T556965.MCAL_PERIOD_NAME         AS c3,
T556965.MCAL_PERIOD_WID         AS c4,
T528077.ROW_WID                 AS c5,
SUM(T551274.BALANCE_GLOBAL1_AMT) AS c6,
T556965.ROW_WID                 AS c7
FROM W_MCAL_DAY_D T556965,
W_LEDGER_D T551773,
W_PROFIT_CENTER_D T528077,
W_INT_ORG_D T538674,
W_GL_BALANCE_A T551274
WHERE ( T528077.ROW_WID   = T551274.PROFIT_CENTER_WID
AND T538674.ROW_WID       = T551274.COMPANY_ORG_WID
AND T538674.COMPANY_FLG   = 'Y'
AND T551274.BALANCE_DT_WID = T556965.ROW_WID
AND T551274.LEDGER_WID     = T551773.ROW_WID
AND (T538674.ORG_NAME     IN ('BAR', 'Foo')))
GROUP BY T528077.ROW_WID,
T528077.PROFIT_CENTER_NAME,
T556965.ROW_WID,
T556965.MCAL_PERIOD_WID,
T556965.MCAL_PERIOD_NAME
Different Object Data Filter:

SELECT DISTINCT T528077.PROFIT_CENTER_NAME AS c1,
T556965.MCAL_PERIOD_NAME                 AS c2,
T556965.MCAL_PERIOD_WID                 AS c3,
T528077.ROW_WID                         AS c4
FROM W_MCAL_DAY_D T556965,
W_LEDGER_D T551773,
W_PROFIT_CENTER_D T528077,
W_INT_ORG_D T538674,
W_PARTY_D T605218,
W_PARTY_ORG_D T605644,
W_STATUS_D T529754,
W_AP_XACT_F T532384
WHERE ( T532384.LEDGER_WID       = T551773.ROW_WID
AND T528077.ROW_WID               = T532384.PROFIT_CENTER_WID
AND T532384.COMPANY_ORG_WID       = T538674.ROW_WID
AND T556965.ADJUSTMENT_PERIOD_FLG = 'N'
AND T532384.CUSTOMER_WID         = T605644.ROW_WID
AND T529754.ROW_WID               = T532384.DOC_STATUS_WID
AND T532384.MCAL_CAL_WID         = T556965.MCAL_CAL_WID
AND T605218.PARTY_ORG_WID         = T605644.ROW_WID
AND T532384.DELETE_FLG           = 'N'
AND T538674.COMPANY_FLG           = 'Y'
AND T532384.POSTED_ON_DT_WID     >= T556965.MCAL_DAY_DT_WID
AND (T538674.ORG_NAME     IN ('Foo')
OR T605218.NAME           IN ('ACME'))
AND (T529754.W_STATUS_CODE IN ('POSTED', 'REVERSED')))
Summary: For mutually exclusive app roles (meaning no hierarchical relationship), when security filters are applied for both app roles, OBI treats them as additive; they will either be written as ORs or combined within an IN clause.

 

 

Scenario 3: A user is assigned only the ‘A’ app role. The ‘A’ app role is made a member of the ‘B’ app role. Only the ‘B’ app role has a security filter on it.

Only app role ‘B’ has a filter, on the presentation column “Company Name” = ‘Foo’. All security filters on app role ‘A’ were removed. So ‘B’ looks like:

[Image: security4]

Outcome: App role ‘B’s security filter is applied to the query.

SELECT T528077.PROFIT_CENTER_NAME AS c2,
T556965.MCAL_PERIOD_NAME         AS c3,
T556965.MCAL_PERIOD_WID         AS c4,
T528077.ROW_WID                 AS c5,
SUM(T551274.BALANCE_GLOBAL1_AMT) AS c6,
T556965.ROW_WID                 AS c7
FROM W_MCAL_DAY_D T556965,
W_LEDGER_D T551773,
W_PROFIT_CENTER_D T528077,
W_INT_ORG_D T538674,
W_GL_BALANCE_A T551274
WHERE ( T528077.ROW_WID   = T551274.PROFIT_CENTER_WID
AND T538674.ROW_WID      = T551274.COMPANY_ORG_WID
AND T538674.ORG_NAME       = 'Foo'
AND T538674.COMPANY_FLG   = 'Y'
AND T551274.BALANCE_DT_WID = T556965.ROW_WID
AND T551274.LEDGER_WID     = T551773.ROW_WID)
GROUP BY T528077.ROW_WID,
T528077.PROFIT_CENTER_NAME,
T556965.ROW_WID,
T556965.MCAL_PERIOD_WID,
T556965.MCAL_PERIOD_NAME
Summary: In app role hierarchical relationships, even though a user is not assigned to the parent app role, they will inherit the security filters from the nearest parent app role that has security filters applied to it.

 

 

Scenario 4: A user is assigned two app roles, ‘A’ and ‘B’. The ‘A’ app role is made a member of the ‘B’ app role. Each app role has a security filter. We follow the same setup as scenario 2, except the app roles are now hierarchical. There are two separate sub-scenarios: the data filters may be on the same logical object or on different ones.

App Role ‘A’ has the following filter: “Company Name” = ‘Foo’.

[Image: security5]

Same Object Data Filter:

App role ‘B’ has the following filter: “Company Name” = ‘BAR’.

[Image: security6]

Different Object Data Filter:

App role ‘B’ has the following filter: “Account Name” = ‘ACME’. Please note: this is on the same presentation object, but the data filter expression is on a different logical column.

[Image: security7]

Outcome: If the data filters are on the same logical object, OBI uses only the lowest app role in the hierarchy; for our example that is app role ‘A’, as ‘A’ is a member of ‘B’. If the data filters are on different logical objects, it also uses only the data filter on the lowest app role.

Same Object Data Filter:
SELECT T528077.PROFIT_CENTER_NAME AS c2,
T556965.MCAL_PERIOD_NAME         AS c3,
T556965.MCAL_PERIOD_WID         AS c4,
T528077.ROW_WID                 AS c5,
SUM(T551274.BALANCE_GLOBAL1_AMT) AS c6,
T556965.ROW_WID                 AS c7
FROM W_MCAL_DAY_D T556965,
W_LEDGER_D T551773,
W_PROFIT_CENTER_D T528077,
W_INT_ORG_D T538674,
W_GL_BALANCE_A T551274
WHERE ( T528077.ROW_WID   = T551274.PROFIT_CENTER_WID
AND T538674.ROW_WID       = T551274.COMPANY_ORG_WID
AND T538674.ORG_NAME       = 'Foo'
AND T538674.COMPANY_FLG   = 'Y'
AND T551274.BALANCE_DT_WID = T556965.ROW_WID
AND T551274.LEDGER_WID     = T551773.ROW_WID)
GROUP BY T528077.ROW_WID,
T528077.PROFIT_CENTER_NAME,
T556965.ROW_WID,
T556965.MCAL_PERIOD_WID,
T556965.MCAL_PERIOD_NAME
Different Object Data Filter:

SELECT T528077.PROFIT_CENTER_NAME AS c2,
T556965.MCAL_PERIOD_NAME         AS c3,
T556965.MCAL_PERIOD_WID         AS c4,
T528077.ROW_WID                 AS c5,
SUM(T551274.BALANCE_GLOBAL1_AMT) AS c6,
T556965.ROW_WID                 AS c7
FROM W_MCAL_DAY_D T556965,
W_LEDGER_D T551773,
W_PROFIT_CENTER_D T528077,
W_INT_ORG_D T538674,
W_GL_BALANCE_A T551274
WHERE ( T528077.ROW_WID   = T551274.PROFIT_CENTER_WID
AND T538674.ROW_WID       = T551274.COMPANY_ORG_WID
AND T538674.ORG_NAME       = 'Foo'
AND T538674.COMPANY_FLG   = 'Y'
AND T551274.BALANCE_DT_WID = T556965.ROW_WID
AND T551274.LEDGER_WID     = T551773.ROW_WID )
GROUP BY T528077.ROW_WID,
T528077.PROFIT_CENTER_NAME,
T556965.ROW_WID,
T556965.MCAL_PERIOD_WID,
T556965.MCAL_PERIOD_NAME
Summary: In app role hierarchical relationships, the security filter that is used is the first one with a data filter, going bottom up through the app role hierarchy. So let’s say you have three app roles A, B, and C, where A is a member of B and B is a member of C. If A has security placed on it, A’s security is used. If neither A nor B has security on it but C does, then C’s security is used.

 

 

Now, knowing these behaviors and how OBI treats data filters on app roles and the app role hierarchy, one can pretty much set up OBI to behave however the security model requires. For example, say a user belongs to N app roles, and one of those app roles is very restrictive. What will happen? Well, since the user is part of the restricted group, he will be restricted. But what if he also belongs to a group that should have no restrictions, like a super user? This can be fixed in one of two ways.

Option 1: If there is no app role hierarchy set up, or you do not want app roles to be set up in a hierarchical manner, you can just add a simple data filter for the superuser app role on the same object(s) as the restricted app role, with the expression 1=1, which always evaluates to true. Looking back at scenario 2, OBI makes the security filters additive with OR logic, and it is smart enough to understand that 1=1 is always true; anything ORed with an always-true condition is always true, so OBI does not add any filters whatsoever. NO FILTER APPLIED! (A toy illustration follows.)
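As a toy illustration, reusing the W_INT_ORG_D table from the queries above, of why the always-true expression neutralizes the restriction:

-- The OR with 1 = 1 makes the whole predicate true for every row, which is why the
-- BI Server can drop the security filter entirely for a user holding the superuser role.
SELECT org_name
FROM   w_int_org_d
WHERE  (org_name IN ('Foo') OR 1 = 1);  -- same result set as having no filter at all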

Option 2: You can set up the superuser app role as a member of the restricted app role, then add a simple security filter with the expression 1=1 on the same objects that are restricted. As we saw with hierarchies in scenario 4, the security filter at the lowest app role is used. So in this case, since the superuser app role is a member of the restricted role and we placed an overriding 1=1 security filter on it, OBI will not apply the restriction. NO FILTER APPLIED! If we do not place any security on the superuser role, the next security filter up the app role hierarchy is used; in that case the restricted app role’s security is applied.
