
OBIEE Briefing Book


A Briefing Book is a collection of static or updatable snapshots of:

  1. Oracle BI Interactive Dashboard pages,
  2. individual analyses (Answers), and
  3. BI Publisher reports (in 11g).

It allows that content to be viewed by anyone with Briefing Book reader software.

The Briefing Book provides a way to see content offline, or share it with others.

Briefing Books have the same look and feel as a dashboard page. Multi-page Briefing Books have paging controls and are well-suited for presenting information to others.

Briefing Books provide a way to archive the information in a dashboard and can be saved locally on a user’s desktop.

You can download briefing books in PDF or MHTML format for printing and viewing. You can also update, schedule, and deliver briefing books using agents.

To create an Oracle BI Briefing Book

  1. First, check the privileges for briefing books.
  2. Click Administration > Manage Privileges (under Security).
  3. Check under Access > Access to Briefing Books (on the right side).

    If the user or role is not listed, add it to grant access to briefing books.

  4. Go to the Briefing Book section under Manage Privileges.

    There are two accessibility options:

    1. Add to or Edit Briefing Book – access privileges (users and roles).

    This allows adding more than one report to the same briefing book and editing the briefing book name.

    2. Download Briefing Book – access privileges (users and roles).

    This allows downloading the created briefing book as a PDF to view all the reports, whether updatable or snapshot.

    5. Go to the dashboard and click Page Options > Edit Dashboard.

    6. Click Tools and, under it, click Page Report Links.

    7. Check "Add to Briefing Book".

    8. Save and run the dashboard. On the right side, go to Page Options and click "Add to Briefing Book".

    9. Clicking the "Add to Briefing Book" option prompts for the content type, location, and navigation links.

    Portal Name: KPI – shows the page name under the dashboard.

    Content type: Updatable – the page content is refreshed each time the briefing book is rerun.

    Snapshot – the current data is stored statically, as an image of the page.

    Follow Briefing Book Navigation Links – the briefing book has a table of contents; clicking an entry links through to that topic's page.

    Number of links to follow: how many levels of navigation links to include in the contents.

    Description: a description of the page for the briefing book.

    Location: browse to the location where the briefing book will be saved.

    Here I am using the content type "Updatable" and navigation links "Yes".

    Browse to the location to save the briefing book.



    Click OK to save.

    A message is displayed: "Successfully added to Briefing Book".

    10. Edit the dashboard; from the top-left pane, drag the Folder object onto a column or section on the right side of the dashboard.

    Edit the folder, assign it the name "Briefing Book", and point it to the folder where the briefing book was saved.

    Browse for the location of the already saved briefing book folder.
    Click OK and save, then check the dashboard page.

    The newly added KPI is displayed in the Financial briefing book; click it to download.

    11. Right-click KPI and click PDF to download the briefing book in PDF format.

    12. The "Opening KPI" dialog box opens to save the KPI as a PDF. Click OK.

    13. The table of contents is displayed first in the PDF, with the refresh date and time.

    14. Click KPI to go to the KPI's reports.

    15. We can add all the reports to a single briefing book by clicking "Add to Briefing Book" below every report.

    16. Click "Add to Briefing Book", locate the briefing book folder, assign the content type, navigation links, and description, and click OK. Check the briefing book.

    You will get the list of dashboard pages in the PDF; initially we had KPI, and now we have added the Cost Comparison by Property report.
    So two entries and their reports will be saved in the briefing book; download and check.

    17. Click Comparison to check the report; you can check the date and time it was added to the briefing book as updatable.

    18. You can edit the briefing book: right-click the briefing book and click Edit.

    19. We can change the KPI or any briefing book content from updatable to snapshot, change navigation links from yes to no, and change the description.

 

Another Example to Create Briefing Book

To create an Oracle BI Briefing Book

  1. Navigate to a dashboard in Oracle BI Interactive Dashboards and then perform one of the following actions:
    • Click Page Options and then click the Add to Briefing Book button, available as the 4th option of Page Options on the dashboard page.

NOTE: This button is not available on an empty dashboard page. Alternatively, click the Add to Briefing Book link that appears with an individual request on the dashboard.

  2. Once you click "Add to Briefing Book", a new pop-up window opens.

 For Content Type, choose one of the following options:

  • Snapshot. Adds the content in its current state. Snapshot content preserves the original data and is not updated when the briefing book is rerun, nor is it updated by Oracle BI Delivers.
  • Updatable. The content is refreshed whenever the briefing book is downloaded, or when it is specified as the delivery content for an iBot in Oracle BI Delivers.

 For Follow Briefing Book Navigation Links, choose one of the following options:

  • No. Briefing book navigation links will not be followed.
  • Yes. Briefing book navigation links will be followed.
  3. Then click "Browse" on the above screen to add this dashboard page to a briefing book. If a briefing book has already been created, select that book and add this page; otherwise, browse to the location where you want to save the briefing book, give it a new name, and click "OK" to save.
  4. Then click "OK" on the next screen.

A confirmation pop-up window appears after successful creation of the briefing book.

  5. Click the Cancel button to return to Oracle BI Interactive Dashboards.

This creates an empty briefing book. The briefing book folder appears in the selection pane in Oracle BI Answers and Oracle BI Delivers.

To add additional dashboard pages to the same briefing book, click the Add to Briefing Book link or button, and then select the briefing book created above or create a new one using the preceding steps.

  6. If you wish to add only a particular saved request from the dashboard, not the entire page, follow the steps below. Click Page Options on the dashboard and then click the "Edit Dashboard" option.
  7. Once you are editing the dashboard, go to the section containing the saved request or analysis that you want to add to the briefing book. Click Section Properties and then click the "Report Links" option.

In the "Report Links" dialog, select Customize, then select the "Add to Briefing Book" option and click "OK".

Now Save and Run the dashboard.

  8. Now, below the saved requests or analyses in the dashboard, you will be able to see the "Add to Briefing Book" option. Click it and follow the same steps to add that report to the briefing book.

Congratulations!!! With the above steps you have successfully created a briefing book in OBIEE 11g.

Adding a Briefing Book to a Dashboard

  1. After successful creation of the briefing book, edit the dashboard page where you want to add it. Drag the "Folder" object from Dashboard Objects onto the column, section, or new page where you want to add the briefing book.
  2. Clicking "Folder" Properties opens the "Folder Properties" pop-up.

In the "Folder Properties" window, browse to the folder where the briefing book was saved and select the "Expand" option.

  3. Now save and run the dashboard and check the changes. You will be able to see the newly added briefing book in the dashboard.
  4. OBIEE 11g supports only the .mht and PDF formats for downloading a briefing book. Right-click the briefing book in the dashboard and select the relevant option to edit or download it.



What is nohup command in Linux


When you execute a Unix job in the background (using &, or the bg command) and log out of the session, your process will get killed. You can avoid this using several methods: executing the job with nohup, or making it a batch job using the at, batch, or cron command.

This quick tip is for beginners. If you’ve been using nohup for a while, leave us a comment and tell us under what situations you use nohup.

In this quick tip, let us review how to keep your process running even after you log out, using nohup.

Nohup stands for "no hang up", and it can be executed as shown below.

nohup syntax:

# nohup command-with-options &

Nohup is very helpful when you have to execute a shell script or command that takes a long time to finish. In that case, you don't want to stay connected to the shell waiting for the command to complete. Instead, execute it with nohup, exit the shell, and continue with your other work.

Explanation about nohup.out file

By default, the standard output will be redirected to the nohup.out file in the current directory, and standard error will be redirected to stdout, so it also goes to nohup.out. So nohup.out will contain both the standard output and the error messages of the script that you executed using the nohup command.

Instead of using nohup.out, you can also redirect the output to a file using the normal shell redirections.
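For instance, to keep the two streams in separate files, redirect them individually (a minimal sketch, using the same custom-script.sh shown below; when both streams are redirected, nohup.out is not created at all):

$ nohup sh custom-script.sh > out.log 2> err.log &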

Example: Printing lines to both standard output & standard error

#!/bin/sh
# custom-script.sh: print one line to stdout and one to stderr every second
while true
do
echo "standard output"
echo "standard error" 1>&2
sleep 1
done

Execute the script without redirection

$ nohup sh custom-script.sh &
[1] 12034
$ nohup: ignoring input and appending output to `nohup.out'

$ tail -f nohup.out
standard output
standard error
standard output
standard error
..

Execute the script with redirection

$ nohup sh custom-script.sh > custom-out.log &
[1] 11069
$ nohup: ignoring input and redirecting stderr to stdout

$ tail -f custom-out.log
standard output
standard error
standard output
standard error
..

If you log out of the shell and log in again, you'll still see custom-script.sh running in the background.

$ ps aux | grep sathiya 
sathiya  12034  0.0  0.1   4912  1080 pts/2    S    14:10   0:00 sh custom-script.sh


OBIEE Resetting BISystemUser password in OBIEE 11g

BISystemUser is, by default, the user used for inter-BI-component communication; it can also be used when impersonation is in play. It is referenced by an authenticator (usually the Default Authenticator, unless changed to a different provider such as Active Directory or another directory).

The credentials for this user are managed via cwallet.sso, the default credential store, under oracle.bi.system > system.user. The BISystemUser does not need any group membership; however, it needs the WebLogic global role called 'Admin' (note: this is not an 'Application Role' by any means). By default BISystemUser is a member of an LDAP group called 'Administrators', which is assigned to the WebLogic global Admin role.
OracleSystemUser is used by Oracle Web Services Manager (OWSM), which is integrated with the WLS EM Console to provide management and securing of web services through administration of policies. By default OracleSystemUser is a member of OracleSystemGroup in the WebLogic LDAP. It is also referenced via the Default Authenticator; this can be changed by following the FMW documentation.

 

More information can be found at: http://docs.oracle.com/cd/E21764_01/bi.1111/e10543/privileges.htm

To reset BISystemUser:

1. Stop the system components in Enterprise Manager:
Click Business Intelligence > Core application > Availability.
2. Log in to the WebLogic Console and change the BISystemUser password:
Click Security Realms > myrealm > Users and Groups >
BISystemUser > Passwords.
3. Change the password in EM:
WebLogic Domain > right-click bifoundation_domain > Security > Credentials > oracle.bi.system > system.user > Edit > change the password.
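Step 3 can also be scripted with the WLST updateCred command instead of clicking through EM (a minimal sketch; the Middleware home path, admin credentials, and new password are assumptions to adapt to your environment):

$ cd /u01/app/Middleware/oracle_common/common/bin
$ ./wlst.sh
wls:/offline> connect('weblogic','welcome1','t3://localhost:7001')
# Update the stored credential the BI components use to connect as BISystemUser
wls:/bifoundation_domain/serverConfig> updateCred(map='oracle.bi.system', key='system.user', user='BISystemUser', password='NewPassw0rd')
wls:/bifoundation_domain/serverConfig> exit()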

 

4. Start the BI system components from Enterprise Manager:
Click Business Intelligence > Core application > Availability.

5. Wait for about 10 minutes.

6. Try the new password at the OBIEE URL.
If you configure Oracle BI to use Active Directory, OID, or another authentication provider, then you must select a user from MSAD for this purpose and give that user the required permissions. You can create a new user in MSAD for this purpose or use a pre-existing user. You give the chosen user the permissions they need by making them a member of the pre-existing BISystem application role.
Once you have removed the default BISystemUser from the Default Authenticator (because you wanted to configure an external LDAP store), you need to create another user to act as BISystemUser. While configuring this user, keep in mind the following considerations, which can cause authentication failures:
1. The BISystemUser created in the external LDAP (Active Directory or any third-party user directory) should not be configured with "Reset Password on First Login", since there is no password-reset screen when OBIEE uses this user for its internal communication.
2. OBIEE cannot handle special non-alphanumeric characters in the password. See BUG 11880111 – password restrictions for BISystemUser – for more information.
3. Make sure the external BISystemUser's password and account are set to NEVER expire, else you cannot log in to OBIEE.
4. Make sure you have assigned the correct roles and that your BISystemUser and system.user passwords are always synchronised.
5. If you have changed the password of this account but not updated the credential store with the new credentials (or have not restarted the system afterwards), authentication will fail.


OBIEE Accidentally deleted BISystem Role


There was a time when someone accidentally deleted the BISystem role from EM, and all hell broke loose: no one was able to access OBIEE through any of the authentication providers (Default, AD, SSO). It took a while to figure out, until the system_jazn_data.xml files from Prod and Test were compared side by side.

When you check the log files you would see errors like this:
nqserver.log
[2013-02-14T13:22:45.000+00:00] [OracleBIServerComponent] [ERROR:1] [] [] [ecid: cb5a346296ed2a97:-7393df37:13cda108fa1:-8000-0000000000001ff6] [tid: 45bda940] [nQSError: 43126] Authentication failed: invalid user/password.
[2013-02-14T13:28:32.000+00:00] [OracleBIServerComponent] [NOTIFICATION:1] [] [] [ecid: 004pRGflGvX3NA3_zlG7yW0002vb000000] [tid: 45ddc940] User OBIS spent 33.000000 milliseconds for http response when authenticateWithLanguage
[2013-02-14T13:28:32.000+00:00] [OracleBIServerComponent] [NOTIFICATION:1] [] [] [ecid: 004pRGflGvX3NA3_zlG7yW0002vb000000] [tid: 45ddc940] User OBIS spent 0.000000 milliseconds for xerces parsing when authenticateWithLanguage
[2013-02-14T13:28:32.000+00:00] [OracleBIServerComponent] [NOTIFICATION:1] [] [] [ecid: 004pRGflGvX3NA3_zlG7yW0002vb000000] [tid: 45ddc940] The response for user OBIS during authenticateWithLanguage is: env:Receiveroracle.bi.security.service.SecurityServiceException: SecurityService::checkSystemUserPermissionsSystem user has not been granted required permission oracle.bi.server.impersonateUser

sawlog0
BI Security Service: ‘Error Message From BI Security Service: oracle.bi.security.service.SecurityServiceException: SecurityService::checkSystemUserPermissionsSystem user has not been granted required permission oracle.bi.server.impersonateUser’

[2013-02-14T13:00:04.000-06:00] [OBIPS] [ERROR:31] [] [saw.security.odbcuserpopulationimpl.searchidentities] [ecid: ] [tid: ] Error retrieving user/group data from Oracle BI Server’s User Population API.
Unable to create a system user connection to BI Server while running user population queries
Odbc driver returned an error (SQLDriverConnectW).
State: HY000.  Code: 10058.
So I went ahead and created this role in EM manually, by comparing the roles and policies with Production.

Steps to follow when the BISystem role accidentally gets deleted:

1. Recreate the BISystem role and map it to the BISystem user.

  • Log in to EM
  • Business Intelligence > coreapplication > right-click > Application Roles
  • Select the Application Stripe as obi
  • Click Create
  • Role Name: BISystem
  • Display Name: BI System Role
  • Members: BISystemUser

2. Recreate the BISystem policies and add the BISystem role as a member.

  • Business Intelligence > coreapplication > right-click > Application Policies
  • Select the Application Stripe as obi
  • Click Create
  • Add the BISystem role as the Grantee
  • In the Permissions section, add the following:
    • oracle.bi.scheduler.manageJobs – grants permission to use Job Manager to manage scheduled Delivers jobs.
    • oracle.bi.server.queryUserPopulation – internal use only.
    • oracle.bi.server.impersonateUsers – used by internal components that need to act on behalf of end users.
    • oracle.bi.server.manageRepositories – grants permission to open, view, and edit repository files using the Administration Tool or the Oracle BI Metadata Web Service.
    • EPM_Essbase_Administrator – grants permissions for the EPM Essbase Administrator.

3. When creating the application policies for the BISystem role, if you don't find a permission in the search results, leave the Resource Name empty, click Continue, and manually add the Permission Class, Resource Name, and Permission Actions.

For example, when trying to add oracle.bi.server.impersonateUser: if it cannot be found in the search results, leave the Resource Name empty and click Continue.

Then enter the Permission Class, Resource Name, and Permission Actions manually.

Once you have added all the policies and restarted your services, OBIEE should be back up and you should be able to log back in again.
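If you prefer the command line, the role and its membership can also be recreated with the OPSS WLST commands (a minimal sketch; the admin credentials and URL are assumptions, and the permission grants still need to be added as described above):

$ ./wlst.sh
wls:/offline> connect('weblogic','welcome1','t3://localhost:7001')
# Recreate the application role in the obi stripe
wls:/bifoundation_domain/serverConfig> createAppRole(appStripe='obi', appRoleName='BISystem')
# Map the BISystemUser account back into the role
wls:/bifoundation_domain/serverConfig> grantAppRole(appStripe='obi', appRoleName='BISystem', principalClass='weblogic.security.principal.WLSUserImpl', principalName='BISystemUser')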

Reference:

http://docs.oracle.com/cd/E28271_01/bi.1111/e10543/install.htm


OBIEE 11g Basic System Administration


In a previous posting in this series, I looked at OBIEE 11gR1's architecture at a high level, and yesterday, following the official launch, I took a look at the installation process. I briefly touched on a few administration tasks such as starting and stopping the OBIEE components, but in this posting I want to look at this topic in more detail, looking at where all the various files have gone and how you perform basic administration on the system.

First up: as an OBIEE 10g administrator, your administration tasks were mostly performed either through the Administration tool, the web-based Presentation Server administration screen, or through editing files in the filesystem. There were something like 700 or so configuration options spread over multiple tools and configuration files, with some options (users and groups, for example) embedded in unrelated repositories (the RPD). OBIEE 11g addresses this by, where possible, moving administration and configuration into Fusion Middleware Control (also referred to as Enterprise Manager).

To start off with something familiar, the Administration tool that was present in OBIEE 10g is also present in 11g, is also Windows-based, and is used to maintain the semantic model used by the BI Server. Here’s a screenshot of the 11g version, showing the SampleApp and some of my own subject areas:


This tool is more or less the same, and has some enhancements in terms of dimension handling, new data sources and the like. A big change though is around security; now when you bring up the Security Manager dialog, it looks like this:


Users and Application Roles (roughly analogous to groups in 10g) are now defined in the WebLogic Server admin console, and you use the Security Manager to define additional links through to other LDAP servers, register custom authenticators, and set up filters and other constraints. In the above screenshot, the users shown in the Users list are those that are held in WebLogic Server's JPS (Java Platform Security) service, and there are no longer any users and groups in the RPD itself. Notice also that there is no Administrator user – instead the standard administrator user is the account that you set up as the WebLogic Server administrator when you installed OBIEE, which typically has the username weblogic. There are also two additional default users; OracleSystemUser is used by the various OBIEE web services to communicate with the BI Server, and BISystemUser is used by BI Publisher to connect to the BI Server as a data source (both default to the same password as the weblogic admin user you set up during the install).

If you switch to the Application Roles tab, you'll also see a list of new default application roles: BISystem, BIAdministrator, BIAuthor and BIConsumer, which are used to grant access to Presentation Server functionality and also encompass the old XMLP_* groups that you used to get in 10g that were used to manage access to BI Publisher. There's also AuthenticatedUser, which is the same as found in the previous release. So how do you create a new user in OBIEE 11g? For that you'll need to start up the web-based WebLogic Server admin console.

To create a new user, log on to the WebLogic Server admin console (http://localhost:7001/console) and select bifoundation_domain > Security Realms from the Domain Structure menu. Then from the list of security realms, select myrealm; from the Settings for myrealm dialog select Users and Groups, and then Users, from the tab menu. You are then presented with a list of existing users.


Pressing the New button brings up a dialog where you can enter the user's details, and you can also use the Groups tab to define a group for the user, or assign the user to an existing group. Security is quite a big change in 11g; in addition, we have the Application Roles setting that you saw in the Security Manager screenshot, which you then map to the groups in WebLogic. I'll cover security in a future posting, but for now, this is how to define basic users and groups.

Another area that's changed significantly is where configuration files and metadata files are stored. In OBIEE 10g, you had two top-level folders, $ORACLEBI and $ORACLEBIDATA. $ORACLEBI (typically installed, for example, in c:\oracle\oraclebi) would hold binaries and configuration files for the BI Server, plus other components such as BI Publisher and JavaHost. $ORACLEBIDATA (installed, typically, at c:\oracle\oraclebidata) would hold binaries for the Presentation Server, config files for the Presentation Server, plus cache files and temporary files for the BI Server. In OBIEE 11gR1 the filesystem changes, with the diagram below showing the high-level filesystem layout for a Windows installation at c:\Middleware:


So where are the key files that we are used to working with? Taking my installation on Microsoft Windows 2003 Server, and with OBIEE 11gR1 installed at C:\Middleware, here’s where my key files are located:

  • RPD Directory: C:\Middleware\instances\instance1\bifoundation\OracleBIServerComponent\coreapplication_obis1\repository
  • NQSConfig.INI: C:\Middleware\instances\instance1\config\OracleBIServerComponent\coreapplication_obis1\nqsconfig.INI
  • NQClusterConfig.INI: C:\Middleware\instances\instance1\config\OracleBIApplication\coreapplication\NQClusterConfig.INI
  • nqquery.log: C:\Middleware\instances\instance1\diagnostics\logs\OracleBIServerComponent\coreapplication_obis1\nqquery.log
  • nqserver.log: C:\Middleware\instances\instance1\diagnostics\logs\OracleBIServerComponent\coreapplication_obis1\nqserver.log
  • nqsserver.exe: C:\Middleware\Oracle_BI1\bifoundation\server\bin\nqsserver.exe
  • Webcat Directory: C:\Middleware\instances\instance1\bifoundation\OracleBIPresentationServicesComponent\coreapplication_obips1\catalog\
  • instanceconfig.xml: C:\Middleware\instances\instance1\config\OracleBIPresentationServicesComponent\coreapplication_obips1\instanceconfig.xml
  • xdo.cfg: C:\Middleware\instances\instance1\config\OracleBIPresentationServicesComponent\coreapplication_obips1\xdo.cfg
  • sawlog0.log: C:\Middleware\instances\instance1\diagnostics\logs\OracleBIPresentationServicesComponent\coreapplication_obips1\sawlog0.log
  • sawserver.exe: C:\Middleware\Oracle_BI1\bifoundation\web\bin\sawserver.exe

Taking a look at the NQSConfig.INI file: whilst the format is the same, notice how many of the parameters are now marked as being managed by Enterprise Manager (Fusion Middleware Control):


Now these are parameters that you're supposed to change only through Fusion Middleware Control. You can change them manually, but they'll get overwritten by the WebLogic Server admin server when you restart WebLogic. You can override this behaviour so that changes you make to these particular parameters don't get overwritten, but then you'll have to remember to copy changes to all the nodes (in OBIEE 11g, clustering is automatically enabled). Not all parameters are managed in this way (in the screenshot above, DATA_STORAGE_PATHS, POPULATE_AGGREGATE_ROLLUP_HITS and USE_ADVANCED_HIT_DETECTION still have to be changed by manually updating this file), but over time the plan is to move more and more parameters to management through Fusion Middleware Control.

To change the managed parameters, go to Fusion Middleware Control, log in as an administrator user (weblogic/welcome1 in my case), and click on the coreapplication node under the Business Intelligence menu entry, so that an overview of the system components' status is shown:


From this screen, you can stop, start and restart all of the system components (BI Server, Presentation Server etc.) via OPMN. From this point, you can then click on the Capacity Management, Diagnostics, Security or Deployment tabs to perform further maintenance.
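The same stop/start operations can also be run from the command line via opmnctl (a minimal sketch; the instance path is an assumption based on the default layout listed above):

C:\Middleware\instances\instance1\bin> opmnctl status
C:\Middleware\instances\instance1\bin> opmnctl stopall
C:\Middleware\instances\instance1\bin> opmnctl startall
C:\Middleware\instances\instance1\bin> opmnctl restartproc ias-component=coreapplication_obis1

The first three report on, stop, and start every OPMN-managed component, while restartproc bounces just the named component (here, the first BI Server).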

  • Capacity Management has four further sub-tabs, and can show Metrics gathered via DMS; the Availability of all the individual system components (allowing you to stop, start and restart them individually); Scalability lets you dynamically increase the number of BI Servers, Presentation Servers, Cluster Controllers and Schedulers in the cluster in conjunction with the "scale out" install option, and Performance lets you turn caching on or off and modify other parameters associated with response time.
  • Diagnostics has two sub-tabs; Log Messages shows you a cluster-wide view of all server errors and warnings, and Log Configuration lets you limit the size of logs and what information gets included in them.


  • Security is used for enabling SSO and selecting the SSO provider.
  • Deployment has five sub-tabs; Presentation lets you set dashboard defaults around page tabs, section headings etc.; Scheduler sets the connection details for the scheduler schema; Marketing is for configuring the Siebel Marketing Content Server connection; Mail is for setting up the mail server that's used by Delivers for email alerts. The most interesting tab is Repository though, as this is where you upload new RPDs for use by the BI Server.

When you first navigate to this tab, the option to upload a new RPD is grayed-out. This is because you have to press the Lock and Edit Configuration button, which stops anyone else from attempting the same operation at the same time. The default installation of OBIEE 11gR1 comes with an RPD called SampleAppLite, and I want to replace this with my own RPD, developed offline previously.


After pressing Lock and Edit Configuration, an "in progress" message comes up, and then you can start uploading your new RPD file. In the example below, I've used the Browse button to pick up a new RPD called OBIEE11g_Examples.rpd, and I've entered the RPD password into the text boxes below (remember, in 11g the RPD itself has a password, rather than you giving the password of an RPD user with admin privileges as you did with 10g).


Pressing Activate Changes will firstly bring up a message saying that the changes will be applied regardless of whether you close your browser window, and shortly afterwards, a second message is displayed saying that the action has completed successfully.


Then if you check the NQSConfig.INI file, you should see your change written to the file. (Technically, the Activate Changes process actually writes the changes to an intermediate file, which the Admin Server then polls regularly and once it sees the changes, writes them to the NQSConfig.INI file).


At this point though, as with OBIEE 10g, you still need to restart the BI Server for this change to take effect. To do this, click on the Restart to Apply Recent Changes link at the top of the web page, which takes you to the Overview page for the coreapplication system components in Fusion Middleware Control. From this point, you can either restart all components (which is a bit of overkill), or switch to the Capacity Management tab, then the Availability sub-tab, and restart just the BI Server system component. Once you've done this, the new RPD will become active. Note also from the screenshot above that RPDs get automatically versioned, with each upload of a particular RPD being saved in the BI Server repository directory with a sequence number appended to it.


Many administration tasks in 11g are the same as in 10g. For example, the log level for a particular user is still defined in the Security Manager, and you still view the query log (nqquery.log) either through the filesystem, or through the Manage Sessions link in the Presentation Server administration screen. Usage tracking is still manually set up through the NQSConfig.INI file, though the schema it uses is automatically created at installation time through the RCU (Fusion Middleware Repository Creation Utility). In 11gR1, only a subset of these administration tasks are performed through Fusion Middleware Control, but as the releases stack up, more of these functions will move to this environment, something that's more important now that clustering is turned on by default.
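For example, following a user's queries on disk works just as it did in 10g; on a Linux install you could tail the query log directly (a sketch; the path mirrors the Windows layout listed earlier, adjusted to your own instance home):

$ tail -f /u01/Middleware/instances/instance1/diagnostics/logs/OracleBIServerComponent/coreapplication_obis1/nqquery.log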

Finally, the Administration screen in the Presentation Server web interface has had a visual overhaul with the 11g release. Some functions, such as the one to reload server metadata in 10g, have moved from Answers into this screen, and new functions have been added to manage, for example, the mapping feature.


Once you get beyond the main menu screen, the way the functions work hasn't changed much in this release. Some of the dialogs have visually changed, but as you can see in the screenshot below, the functions work in much the same way as 10g, and you can see the Application Roles that were visible in the Security Manager at the start of this posting being used to grant access to Presentation Server functionality.


So that’s it for basic administration. Take a look at our OBIEE 11gR1 Resource Centre for a complete listing of our 11g postings, and we’re also running a special, three-day Oracle BI 11g Training Days event in Atlanta, London and Bangalore later in the year if you’re after in-depth, hands-on training on this new release. For now though, I’m going to hand-off to Venkat for a series of postings on the new features in the 11g BI Server.


Oracle Apps 12.2.4 Installation Step-By-Step


Oracle E-Business Suite 12.2.4 Installation Step-by-Step Procedure:

Here I installed E-Business Suite 12.2.4 on Oracle Enterprise Linux 5.7

Once you have installed Linux, please follow the steps below to install E-Business Suite 12.2.4.

1.  Download the required zip files from edelivery.oracle.com.


2.  Copy all of the above zip files to Linux Machine /u01/Stage directory:

Unzip the following three zips:

[root@apps Stage]# cd /u01/Stage

unzip V46243-01_1of3

unzip V46243-01_2of3

unzip V46243-01_3of3

3. Once you unzip the above zips, start creating the stage:

[root@apps Stage]# cd /u01/Stage/startCD/Disk1/rapidwiz/bin

[root@apps bin]# sh buildStage.sh
Copyright (c) 2002, 2013 Oracle Corporation
Redwood Shores, California, USA

Oracle E-Business Suite Rapid Install

Version 12.2.0
Press Enter to continue…
Build Stage Menu

——————————————————

1. Create new stage area

2. Copy patches to existing stage area

3. List files in TechPatches directory

4. Exit menu

Enter your choice [4]: 1

Rapid Install Platform Menu

——————————————————

1. Oracle Solaris SPARC (64-bit)

2. Linux x86 (64-bit)

3. IBM AIX on Power Systems (64-bit)

4. HP-UX Itanium

5. Exit Menu

Enter your choice [5]: 2
Running command:

/u01/Stage/startCD/Disk1/rapidwiz/bin/../jre/Linux_x64/1.6.0/bin/java -classpath /u01/Stage/startCD/Disk1/rapidwiz/bin/../jlib/emocmutl.jar:/u01/Stage/startCD/Disk1/rapidwiz/bin/../jlib/ewt-3_4_22.jar:/u01/Stage/startCD/Disk1/rapidwiz/bin/../jlib/share-1_1_18.jar:/u01/Stage/startCD/Disk1/rapidwiz/bin/../jlib/jnls.jar:/u01/Stage/startCD/Disk1/rapidwiz/bin/../jlib/ACC.JAR:/u01/Stage/startCD/Disk1/rapidwiz/bin/../jlib/netcfg.jar:/u01/Stage/startCD/Disk1/rapidwiz/bin/../jlib/ojdbc14.jar:/u01/Stage/startCD/Disk1/rapidwiz/bin/../jlib/OraInstaller.jar:/u01/Stage/startCD/Disk1/rapidwiz/bin/../jlib/java oracle.apps.ad.rapidwiz.util.StageBuilder /u01/Stage/startCD/Disk1/rapidwiz/bin Linux_x64 Linux_x64

Specify the directory containing the zipped installation media:
/u01/Stage/
File list:
/u01/Stage/startCD/Disk1/rapidwiz/bin/stageData/zipFiles.dat
The set of zip files is complete.
Unzip command is: /u01/Stage/startCD/Disk1/rapidwiz/unzip/Linux_x64/unzip -o
Unzipping V35230-01_1of2.zip
Unzipping V35230-01_2of2.zip
Unzipping V35231-01_1of5.zip
Unzipping V35231-01_2of5.zip
Unzipping V35231-01_3of5.zip
Unzipping V35231-01_4of5.zip
Unzipping V35231-01_5of5.zip
Unzipping V35802-01.zip
Unzipping V35803-01_1of3.zip
Unzipping V35803-01_2of3.zip
Unzipping V35803-01_3of3.zip
Unzipping V35804-01_1of2.zip
Unzipping V35804-01_2of2.zip
Unzipping V35805-01_1of2.zip
Unzipping V35805-01_2of2.zip
Unzipping V35806-01_1of3.zip
Unzipping V35806-01_2of3.zip
Unzipping V35806-01_3of3.zip
Unzipping V35807-01.zip
Unzipping V35808-01.zip
Unzipping V35809-01.zip
Unzipping V35810-01.zip
Unzipping V35811-01.zip
Unzipping V35812-01.zip
Unzipping V35813-01.zip
Unzipping V29764-01.zip
Unzipping V29856-01.zip
Unzip command is: /u01/Stage/startCD/Disk1/rapidwiz/unzip/Linux_x64/unzip -o
Applying one-off patches…
All files have been unzipped successfully.
Stage area is confirmed to be complete.
Command = cp /u01/Stage/V35813-01.zip /u01/Stage/startCD/Disk1/rapidwiz/bin/stageData/epdFiles/epdLinux_x64.zip

Finished unzipping shiphome.

Directory /u01/Stage/TechPatches

Unzipping Oracle Software Delivery Cloud one-off patches…
Command: /u01/Stage/startCD/Disk1/rapidwiz/bin/../unzip/Linux_x64/unzip -o /u01/Stage/startCD/Disk1/rapidwiz/bin/stageData/epdFiles/epdLinux_x64.zip -d /u01/Stage/startCD/Disk1/rapidwiz/bin/../../../../TechPatches/..
Press Enter to continue…
Archive: /u01/Stage/startCD/Disk1/rapidwiz/bin/stageData/epdFiles/epdLinux_x64.zip
extracting: nux-x86-64.zip
extracting: /u01/Stage/startCD/Disk1/rapidwiz/bin/../../../../TechPatches/../TechPatches/DB/14598522/p14598522_112030_Linux-x86-64.zip
extracting: /u01/Stage/startCD/Disk1/rapidwiz/bin/../../../../TechPatches/../TechPatches/DB/12949905/p12949905_112030_Linux-x86-64.zip
extracting: /u01/Stage/startCD/Disk1/rapidwiz/bin/../../../../TechPatches/../TechPatches/DB/6880880/p6880880_112000_Linux-x86-64.zip
extracting: /u01/Stage/startCD/Disk1/rapidwiz/bin/../../../../TechPatches/../TechPatches/DB/13040331/p13040331_112030_Linux-x86-64.zip
extracting: /u01/Stage/startCD/Disk1/rapidwiz/bin/../../../../TechPatches/../TechPatches/DB/12955701/p12955701_112030_Linux-x86-64.zip
extracting: /u01/Stage/startCD/Disk1/rapidwiz/bin/../../../../TechPatches/../TechPatches/DB/14005749/p14005749_112030_Generic.zip
extracting: /u01/Stage/startCD/Disk1/rapidwiz/bin/../../../../TechPatches/../TechPatches/DB/14013094/p14013094_112030_Generic.zip
extracting: /u01/Stage/startCD/Disk1/rapidwiz/bin/../../../../TechPatches/../TechPatches/DB/11071989/p11071989_112030_Linux-x86-64.zip
extracting: /u01/Stage/startCD/Disk1/rapidwiz/bin/../../../../TechPatches/../TechPatches/DB/13388104/p13388104_112030_Linux-x86-64.zip
extracting: /u01/Stage/startCD/Disk1/rapidwiz/bin/../../../../TechPatches/../TechPatches/DB/13808632/p13808632_112030_Generic.zip
extracting: /u01/Stage/startCD/Disk1/rapidwiz/bin/../../../../TechPatches/../TechPatches/DB/14153501/p14153501_112030_Linux-x86-64.zip
extracting: /u01/Stage/startCD/Disk1/rapidwiz/bin/../../../../TechPatches/../TechPatches/DB/14832335/p14832335_112030_Generic.zip
extracting: /u01/Stage/startCD/Disk1/rapidwiz/bin/../../../../TechPatches/../TechPatches/DB/11820674/p11820674_R12_LINUX.zip
Finished unzipping Oracle Software Delivery Cloud one-off patches.
Press Enter to continue…

Stage Builder will now stage the one-off patches for Linux_x64…

Press Enter to continue…

Copying latest one-off patches to stage area…

Running command:

/u01/Stage/startCD/Disk1/rapidwiz/bin/../unzip/Linux_x64/unzip -o /u01/Stage/startCD/Disk1/rapidwiz/bin/../Xpatches/Linux_x64.zip -d /u01/Stage/startCD/Disk1/rapidwiz/bin/../../../../TechPatches

Press Enter to continue…
Archive: /u01/Stage/startCD/Disk1/rapidwiz/bin/../Xpatches/Linux_x64.zip
creating: /u01/Stage/startCD/Disk1/rapidwiz/bin/../../../../TechPatches/MiddleTier/13947608/
extracting: /u01/Stage/startCD/Disk1/rapidwiz/bin/../../../../TechPatches/MiddleTier/13947608/p13947608_111160_Generic.zip
creating: /u01/Stage/startCD/Disk1/rapidwiz/bin/../../../../TechPatches/MiddleTier/17325559/
extracting: /u01/Stage/startCD/Disk1/rapidwiz/bin/../../../../TechPatches/DB/17468141/p17468141_112030_Linux-x86-64.zip
creating: /u01/Stage/startCD/Disk1/rapidwiz/bin/../../../../TechPatches/DB/17047617/
extracting: /u01/Stage/startCD/Disk1/rapidwiz/bin/../../../../TechPatches/DB/17047617/p17047617_112030_Linux-x86-64.zip
creating: /u01/Stage/startCD/Disk1/rapidwiz/bin/../../../../TechPatches/DB/15967134/
extracting: /u01/Stage/startCD/Disk1/rapidwiz/bin/../../../../TechPatches/DB/15967134/p15967134_112030_Linux-x86-64.zip

Finished copying additional patches.
Verifying stage area…

Directory /u01/Stage/TechInstallMedia is valid.
Directory /u01/Stage/TechPatches/DB is valid.
Directory /u01/Stage/TechPatches/MiddleTier is valid.
Directory /u01/Stage/EBSInstallMedia/AppDB is valid.
Directory /u01/Stage/EBSInstallMedia/Apps is valid.
Directory /u01/Stage/EBSInstallMedia/AS10.1.2 is valid.
Directory /u01/Stage/TechInstallMedia/database is valid.
Directory /u01/Stage/TechInstallMedia/ohs11116 is valid.
Directory /u01/Stage/TechInstallMedia/wls1036_generic is valid.
Stage area verified.

Press Enter to continue…
Build Stage Menu

——————————————————

1. Create new stage area

2. Copy patches to existing stage area

3. List files in TechPatches directory

4. Exit menu

Enter your choice [4]:

Stage Builder exiting…

 

4. Create directory structures

[root@apps /]# mkdir -p /u01/ebiz/ora

[root@apps /]# mkdir -p /u01/ebiz/apps

[root@apps /]# mkdir -p /u01/ebiz/oraInventory

[root@apps /]# chown -R oracle:dba /u01

[root@apps /]# chmod -R 755 /u01

 

5. Create the oraInst.loc file and put the oraInventory location in that file.

[root@apps /]# vi /etc/oraInst.loc

inventory_loc=/u01/ebiz/oraInventory

[root@apps oraInventory]#

 

6. Install the following RPMs, which are required for a successful installation:

[root@apps /]# rpm -ivh compat-libcwait-2.1-2.x86_64.rpm

[root@apps /]# rpm -ivh libaio-0.3.105-2.x86_64.rpm

[root@apps /]# rpm -ivh openmotif21-2.1.30-11.EL5.i386.rpm
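You can optionally sanity-check that all three packages are installed before continuing:

[root@apps /]# rpm -q compat-libcwait libaio openmotif21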

 

7. Start the installation by running rapidwiz

Note: Rapidwiz will only install the 12.2.0 version of E-Business Suite; after that we need to upgrade it to 12.2.4.

[root@apps /]# cd /u01/Stage/startCD/Disk1/rapidwiz/

[root@apps rapidwiz]# ./rapidwiz

Rapid Install Wizard is validating your file system……
CMDDIR=/u01/Stage/startCD/Disk1/rapidwiz
Rapid Install Wizard will now launch the Java Interface…..

The Rapid Install wizard then presents a series of configuration screens; click Next through each of them, supplying your values as required. At the install type screen I chose Fresh Database; please choose Vision Demo Database if you require a Vision instance. Click Yes on the validation prompt, and when the installation completes, click Finish.

You can then verify the Oracle Applications home page and the WebLogic home page in a browser.

UPGRADE PROCEDURE FROM 12.2.0 TO 12.2.4

Please follow the procedure below to upgrade Oracle E-Business Suite from 12.2.0 to 12.2.4. You can follow Note ID 1617458.1 to perform the upgrade. Download the patches below, shut down all services, and apply them.

DATABASE PATCHES
================
16989137
17875948
17912217
18419770
18614015
18685209
19078951  Note:  Rollback Patch 18259911 before applying 19078951 patch
19393542


Forms and Reports 10.1.2.3 Patches and Bug Number
=======================================
18186693
18620223
19434967


Patches and bug numbers for Oracle Fusion Middleware (FMW) 11.1.1.6
====================================================

13055259
17555224
17639414

Patches and bug numbers for FMW oracle_common 11.1.1.6
===========================================
13490778
17284368
18989444
19462638

Oracle Weblogic Server 10.3.6.0 Patch and Bug Numbers
=========================================
17893334

Section 3: Apply Consolidated Seed Table Upgrade Patch (Required)
=================================================
Patch 17204589: start only the Admin Server of fs1 and apply the patch.


Apply the Latest AD and TXK Delta Release Update Packs
==========================================
Start the Admin Server of fs1 and run adgrants as per the Patch 18283295 readme.

Patch 18283295:R12.AD.C.Delta.5

Patch 19581770:R12.AD.C

Patch 18288881:R12.TXK.C.Delta.5

Patch 19445058:R12.TXK.C

Patch 19259764:R12.FND.C

Patch Code Level 17537119

adop phase=apply patches=18283295,19581770 hotpatch=yes merge=yes

Check Code Level by running patch 17537119

adop phase=apply patches=18288881,19445058 hotpatch=yes merge=yes

adop phase=apply patches=19259764 hotpatch=yes


RUP PATCH
=========
Stop admin and nodemanager of fs1

adop phase=apply apply_mode=downtime patches=17919161

Start all Application tier services on the run file system

adop phase=cleanup

adop phase=fs_clone
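Once fs_clone finishes, you can confirm that the patching cycle is complete and the file systems are back in sync (a quick check using the standard 12.2 AD tool):

adop -status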


Oracle Enterprise Linux 5.7 Installation


This article provides a pictorial guide for performing a default installation of Oracle Enterprise Linux 5.7.

1. Boot from the CD or DVD. At the boot screen, press the "Enter" key.

2. Press the "Tab" key to move focus to the "Skip" button, then press the "Enter" key to continue.

3. On the "Welcome" screen, click the "Next" button.

4. Select the appropriate language, then click the "Next" button.

5. Select the relevant keyboard setting, then click the "Next" button.

6. Click the "Yes" button on the disk partitioning warning dialog, then allow the installer to automatically partition the disk by clicking the "Next" button.

7. Set the hostname manually by selecting "manually" and entering the hostname you want; here I used "fa.fusionappsdba.com". The network interface details can be supplied later, after the installation completes, so click "Next".

8. Select the relevant region by clicking on the map.

9. Enter a root password for the server, then click the "Next" button to proceed.

10. Check all packages, select the "Customize now" option and the appropriate installation type, and click the "Next" button.

11. The "Package Group Selection" screen allows you to select the required package groups, and individual packages within the details section. Select the "Development" group and click "Optional Packages".

12. Select the highlighted packages "libstdc++44…" and "imake-1.0.2-3…".

13. Select "Base System", go to "System Tools", click "Optional packages", and select the package "oracle-validated-1.1.0-…".

15. On the "About to Install" screen, click the "Next" button.

16. Wait until the installation completes.

17. Click the "Reboot" button to complete the installation.

18. After the reboot, on the "Welcome" screen, click the "Forward" button.

19. Accept the license agreement and click the "Forward" button.

20. On the Firewall screen, choose the "Disabled" option, click the "Forward" button, and click the "Yes" button on the subsequent warning screen.

21. On the SELinux screen, choose the "Disabled" option, click the "Forward" button, and click the "Yes" button on the subsequent warning screen.

22. Accept the default setting on the Kdump screen by clicking the "Forward" button.

23. Adjust the Date and Time settings if necessary, and click the "Forward" button.

24. Create an additional system user if required, and click the "Forward" button. If you chose not to define an additional system user, click the "Continue" button on the resulting warning dialog.

25. On the sound card screen, click the "Forward" button.

26. On the "Additional CDs" screen, click the "Finish" button.

27. Once the system has rebooted, you are presented with the login screen. Once logged in, you are ready to use the desktop; update the IP address and hostname in "/etc/hosts", as shown in the sketch below.
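A minimal /etc/hosts entry (the IP address below is an example from a private/host-only network; substitute your machine's real address and hostname):

127.0.0.1        localhost.localdomain   localhost
192.168.56.101   fa.fusionappsdba.com    fa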

28. Go to "neat" (the network administration tool) and update the details in the network adapters as well.

29. Use the "service network restart" command to save and apply the network configuration.

30. This step applies only if you have installed on Oracle VirtualBox.

Go to the Devices menu and select Install Guest Additions; a window opens with the Guest Additions. Run "VBoxLinuxAdditions.run" to install the Guest Additions and avoid display and integration issues with the Windows host.

To install the Guest Additions:

  • Go to Devices and select Install Guest Additions.
  • Open a new terminal and copy the Guest Additions locally.
  • Eject the Guest Additions CD.
  • Mount the Linux software.
  • Go to the Server directory in the Linux software and install the required package.
  • Go to the Guest Additions software location and install the Guest Additions.

Reboot your guest operating system once.

Oracle Enterprise Linux installation completed.


OBIA 7.9.6.4 Full & Incremental Loading


OBIA 7.9.6.4

Full & Incremental Loading

What is Full Load

  • A full load means loading data for the very first time into the BI Apps data warehouse.
  • A full load is also undertaken when data seems corrupt and needs to be loaded cleanly again.
  • The data extracted is of huge volume, and the ETL mappings might run for a long duration, often a few days.
  • The data volume is controlled by the INITIAL EXTRACT DATE parameter.
  • Some developers feel a full load is the solution to just about any problem, which is incorrect.

What is Incremental Load

  • This is the load which customers generally run each day.
  • Only the incremental changes in the source, compared to the previous load, are captured and loaded.
  • The data volume is low.
  • The incremental data set (the delta data) is determined mostly by the LAST EXTRACT DATE parameter.
  • Logic and calculations:
    • Delta data = current source data – current warehouse data
    • Delta data = source records whose change/insert date > last ETL extract date
    • Delta data = new records + existing changed records
    • New records are inserted, changed records are updated (in the warehouse)

Screenshots illustrate: the SQL changes for incremental load; the methodology in Informatica; the DAC metadata controlling full/incremental behaviour; ETL running in full mode; and the DAC refresh dates for the target and the source.



OBIA 7.9.6.4 ETL Patterns


OBIA 7.9.6.4

ETL Patterns

Screenshots illustrate the ETL patterns: the typical load data process (ETL); the SDE map patterns, including the BC mapplet and the SA mapplet; the SIL map patterns, including the ETL_PROC_WID mapplet and the SIL mapplets for facts and dimensions; and the common ETL columns.


OBIEE Variables Example


OBIEE variable types:

  • Repository Variables
    o Static Variables
    o Dynamic Variables
  • Session Variables
    o System
      • Security
      • Others
    o Non-System

To create any variable, click Manage > Variable.

This opens the Variable Manager as shown below.

REPOSITORY VARIABLE (STATIC)

  • Initialized only when the BI Server is started
  • Is a hard-coded value such as a string, number, etc.
  • The value can only be changed by logging in to the RPD file

REPOSITORY VARIABLE (DYNAMIC)

  • Initialized when the BI Server is started
  • Assigned a value dynamically, based on the result of a query
  • The value depends on the SQL provided in the initialization block

Create a dynamic variable and provide a Name and Default Initializer. Create a new initialization block by clicking the New button in the above screen.

Provide the initialization block variable name.

Click the "Edit Data Source" button and provide the SQL to be used for the variable, e.g.

select lower(sys_context('USERENV','SESSION_USER')) ||
       '@' ||
       lower(sys_context('USERENV','DB_NAME'))
from dual;

Provide the connection pool name. Note: a separate connection pool should be created for initialization blocks to execute the SQL used for fetching data for the variable.

Test the SQL by clicking the Test button.

SESSION VARIABLE (SYSTEM)

  • Initialized when an Analytics web user logs in (creates a new session)
  • Initialization depends on an initialization block, similar to a dynamic repository variable
  • Only system-reserved variables can be created; the following is the list. The ones in blue are security-related session variables.
Variable – Description

USER – Holds the value the user enters as his or her logon name. This variable is typically populated from the LDAP profile of the user.
PROXY – Holds the name of the proxy user. A proxy user is a user that has been authorized to act for another user.
GROUP – Contains the groups to which the user belongs. Exists only for compatibility with previous releases. Legacy groups are mapped to application roles automatically.
WEBGROUPS – Specifies the Catalog groups (Presentation Services groups) to which the user belongs, if any. Note that the recommended practice is to use application roles rather than Catalog groups.
USERGUID – Contains the global unique identifier (GUID) of the user, typically populated from the LDAP profile of the user.
ROLES – Contains the application roles to which the user belongs.
ROLEGUIDS – Contains the global unique identifiers (GUIDs) for the application roles to which the user belongs. GUIDs for application roles are the same as the application role names.
PERMISSIONS – Contains the permissions held by the user, such as oracle.bi.server.impersonateUser or oracle.bi.server.manageRepository.
DISPLAYNAME – Used for Oracle BI Presentation Services. It contains the name that is displayed to the user in the greeting in the Oracle BI Presentation Services user interface. It is also saved as the author field for catalog objects. This variable is typically populated from the LDAP profile of the user.
PORTALPATH – Used for Oracle BI Presentation Services. It identifies the default dashboard the user sees when logging in (the user can override this preference after logging on).
LOGLEVEL – The value of LOGLEVEL (a number between 0 and 5) determines the logging level that the Oracle BI Server uses for user queries. This system session variable overrides a variable defined in the Users object in the Administration Tool. If the administrator user (defined upon install) has a logging level defined as 4 and the session variable LOGLEVEL defined in the repository has a value of 0 (zero), the value of 0 applies.
REQUESTKEY – Used for Oracle BI Presentation Services. Any users with the same nonblank request key share the same Oracle BI Presentation Services cache entries. This tells Oracle BI Presentation Services that these users have identical content filters and security in the Oracle BI Server. Sharing Oracle BI Presentation Services cache entries is a way to minimize unnecessary communication with the Oracle BI Server.
SKIN – Determines certain elements of the look and feel of the Oracle BI Presentation Services user interface. The user can alter some elements of the user interface by picking a style when logged on to Oracle BI Presentation Services. The SKIN variable points to an Oracle BI Presentation Services folder that contains the nonalterable elements (for example, figures such as GIF files). Such directories begin with sk_. For example, if a folder were called sk_companyx, the SKIN variable would be set to companyx.
DESCRIPTION – Contains a description of the user, typically populated from the LDAP profile of the user.
USERLOCALE – Contains the locale of the user, typically populated from the LDAP profile of the user.
DISABLE_CACHE_HIT – Used to enable or disable Oracle BI Server result cache hits. This variable has a possible value of 0 or 1.
DISABLE_CACHE_SEED – Used to enable or disable Oracle BI Server result cache seeding. This variable has a possible value of 0 or 1.
DISABLE_SUBREQUEST_CACHE – Used to enable or disable Oracle BI Server subrequest cache hits and seeding. This variable has a possible value of 0 or 1.
SELECT_PHYSICAL – Identifies the query as a SELECT_PHYSICAL query.
DISABLE_PLAN_CACHE_HIT – Used to enable or disable Oracle BI Server plan cache hits. This variable has a possible value of 0 or 1.
DISABLE_PLAN_CACHE_SEED – Used to enable or disable Oracle BI Server plan cache seeding. This variable has a possible value of 0 or 1.
TIMEZONE – Contains the time zone of the user, typically populated from the LDAP profile of the user.

SESSION VARIABLE (NON-SYSTEM)

  • Initialized when an Analytics web user logs in (creates a new session)
  • Initialization depends on an initialization block, similar to a dynamic repository variable

ROW-WISE INITIALIZATION OF VARIABLES

If a variable is marked for row-wise initialization, it returns an array of values. Below are the steps; e.g., if we want a variable to store the last 10 years, here is how we create it:

Create a variable and click on New  to create  new initialization block

Provide a name for the initialization block and click “Edit Data Source”

Provide a sql that returns multiple values. Set connection pool and Test the sql. Save this and exit the “Variable Manager”

Reopen the “Variable Manager” and open the initialization block. Next click on the “Edit Data Target”.

Select the variable, check “Row wise initialization”, and click OK.

On the initialization block page, click Test to check that the variable array is initialized and returns values.

Access methods for variable types: repository variables are referenced in expressions as VALUEOF("variable_name"), while session variables are referenced as VALUEOF(NQ_SESSION.variable_name).


What is ETL


ETL (Extract Transform and Load)

This section of the project goes a step further in elaborating the Data Warehouse concept which was described in the previous section. Here, we will see how the tool INFORMATICA is used to extract data from Source(s), transform it, and then load it into the Target. Data transformation is done to eliminate any erroneous or redundant data. This ensures that only correct data is loaded into the Target (OLAP), which will be used for analysis/reporting.

Informatica PowerCenter architecture is used to achieve the extract, transform and load of data. PowerCenter provides an environment that allows you to load data into a centralized location, such as a datamart, data warehouse, or operational data store (ODS). You can extract data from multiple sources, transform the data according to business logic you build in the client application, and load the transformed data into file and relational targets. PowerCenter provides the following integrated components:

  • PowerCenter repository. The PowerCenter repository is at the center of the PowerCenter suite. You create a set of metadata tables within the repository database that the PowerCenter applications and tools access. The PowerCenter Client and Server access the repository to save and retrieve metadata.
  • PowerCenter Repository Server. The PowerCenter Repository Server manages connections to the repository from client applications. It inserts, updates, and fetches objects from the repository database tables. It also maintains object consistency.
  • PowerCenter Client. Use the PowerCenter Client to manage users, define sources and targets, build mappings and mapplets with the transformation logic, and create workflows to run the mapping logic. The PowerCenter Client has the following client applications: Repository Manager, Repository Server Administration Console, Designer, Workflow Manager, and Workflow Monitor.
  • PowerCenter Server. The PowerCenter Server extracts the source data, performs the data transformation, and loads the transformed data into the targets.

Sources

PowerCenter accesses the following sources:

  • Relational. Oracle, Sybase, Informix, IBM DB2, Microsoft SQL Server, and Teradata.
  • File. Fixed and delimited flat file, COBOL file, and XML.
  • Application. You can purchase additional PowerCenter Connect products to access business sources, such as PeopleSoft, SAP R/3, Siebel, IBM MQSeries, and TIBCO.
  • Mainframe. You can purchase PowerExchange for faster access to IBM DB2 on MVS.
  • Other. Microsoft Excel and Access.

Targets

PowerCenter can load data into the following targets:

  • Relational. Oracle, Sybase, Sybase IQ, Informix, IBM DB2, Microsoft SQL Server, and Teradata.
  • File. Fixed and delimited flat file and XML.
  • Application. You can purchase additional PowerCenter Connect products to load data into SAP BW. You can also load data into IBM MQSeries message queues and TIBCO.
  • Other. Microsoft Access.

You can load data into targets using ODBC or native drivers, FTP, or external loaders.

Repository

The PowerCenter repository resides on a relational database. The repository database tables contain the instructions required to extract, transform, and load data. PowerCenter Client applications access the repository database tables through the Repository Server.

You add metadata to the repository tables when you perform tasks in the PowerCenter Client application, such as creating users, analyzing sources, developing mappings or mapplets, or creating workflows. The PowerCenter Server reads metadata created in the Client application when you run a workflow. The PowerCenter Server also creates metadata, such as start and finish times of a session or session status.

You can develop global and local repositories to share metadata:

  • Global repository. The global repository is the hub of the domain. Use the global repository to store common objects that multiple developers can use through shortcuts. These objects may include operational or Application source definitions, reusable transformations, mapplets, and mappings.
  • Local repositories. A local repository is within a domain that is not the global repository. Use local repositories for development. From a local repository, you can create shortcuts to objects in shared folders in the global repository. These objects typically include source definitions, common dimensions and lookups, and enterprise standard transformations. You can also create copies of objects in non-shared folders.
  • Version control. A versioned repository can store multiple copies, or versions, of an object. Each version is a separate object with unique properties. PowerCenter version control features allow you to efficiently develop, test, and deploy metadata into production.

You can connect to a repository, back up, delete, or restore repositories using pmrep, a command line program.
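
For example, a typical pmrep session might look like the following (options vary by PowerCenter version, so treat this as a sketch; the repository, user, host, and file names are hypothetical):

    pmrep connect -r MY_REPO -n admin -x admin_password -h repserver_host -o 5001
    pmrep backup -o my_repo_backup.rep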

You can view much of the metadata in the Repository Manager. The Informatica Metadata Exchange (MX) provides a set of relational views that allow easy SQL access to the Informatica metadata repository.

Repository Server

The Repository Server manages repository connection requests from client applications. For each repository database registered with the Repository Server, it configures and manages a Repository Agent process. The Repository Server also monitors the status of running Repository Agents, and sends repository object notification messages to client applications.

The Repository Agent is a separate, multi-threaded process that retrieves, inserts, and updates metadata in the repository database tables. The Repository Agent ensures the consistency of metadata in the repository by employing object locking.

PowerCenter Client

The PowerCenter Client consists of the following applications that you use to manage the repository, design mappings, mapplets, and create sessions to load the data:

  • Repository Server Administration Console. Use the Repository Server Administration console to administer the Repository Servers and repositories.
  • Repository Manager. Use the Repository Manager to administer the metadata repository. You can create repository users and groups, assign privileges and permissions, and manage folders and locks.
  • Designer. Use the Designer to create mappings that contain transformation instructions for the PowerCenter Server. Before you can create mappings, you must add source and target definitions to the repository. The Designer has five tools that you use to analyze sources, design target schemas, and build source-to-target mappings:
    • Source Analyzer. Import or create source definitions.
    • Warehouse Designer. Import or create target definitions.
    • Transformation Developer. Develop reusable transformations to use in mappings.
    • Mapplet Designer. Create sets of transformations to use in mappings.
    • Mapping Designer. Create mappings that the PowerCenter Server uses to extract, transform, and load data.
  • Workflow Manager. Use the Workflow Manager to create, schedule, and run workflows. A workflow is a set of instructions that describes how and when to run tasks related to extracting, transforming, and loading data. The PowerCenter Server runs workflow tasks according to the links connecting the tasks. You can run a task by placing it in a workflow.
  • Workflow Monitor. Use the Workflow Monitor to monitor scheduled and running workflows for each PowerCenter Server. You can choose a Gantt Chart or Task view. You can also access details about those workflow runs.

PowerCenter Server:

The PowerCenter Server reads mapping and session information from the repository. It extracts data from the mapping sources and stores the data in memory while it applies the transformation rules that you configure in the mapping. The PowerCenter Server loads the transformed data into the mapping targets.

The PowerCenter Server can achieve high performance using symmetric multi-processing systems. The PowerCenter Server can start and run multiple workflows concurrently. It can also concurrently process partitions within a single session. When you create multiple partitions within a session, the PowerCenter Server creates multiple database connections to a single source and extracts a separate range of data for each connection, according to the properties you configure.

Connectivity

PowerCenter uses the following types of connectivity:

  • Network protocol
  • Native drivers
  • ODBC

The PowerCenter Client uses ODBC and native drivers to connect to source and target databases. It uses TCP/IP to connect to the Repository Server. The Repository Server uses native drivers to connect to the repository database. The Workflow Manager and the PowerCenter Server use TCP/IP to communicate with each other.

The PowerCenter Server uses native drivers to connect to the databases to move data. You can optionally use ODBC to connect the PowerCenter Server to the source and target databases. It uses TCP/IP to connect to the PowerCenter Client.

Database Connections

The Repository Server maintains a pool of reusable database connections for serving client applications. The server generates a Repository Agent process for each database. The Repository Agent creates new database connections only if all the current connections are in use.

For example, if 10 clients send requests to the Repository Agent one at a time, the agent requires only one connection. It reuses the same database connection for all the requests. If the 10 clients send requests simultaneously, the Repository Agent opens 10 connections. You can set the maximum number of open connections using the DatabasePoolSize parameter in the repository configuration file.

For a session, a reader object holds the connection for as long as it needs to read the data from the source tables. A writer object holds a connection for as long as it needs to write data to the target tables.

The PowerCenter Server maintains a database connection pool for stored procedure or lookup databases in a workflow. By default, the PowerCenter Server allows an unlimited number of connections to lookup or stored procedure databases; you can optionally set the MaxLookupSPDBConnections parameter to limit these connections when you configure the PowerCenter Server. If a database user does not have permission for the number of connections a session requires, the session fails.

For pre-session, post-session, and load stored procedures, consecutive stored procedures reuse a connection if they have identical connection attributes. Otherwise, the connection for one stored procedure closes and a new connection begins for the next stored procedure.

PowerCenter Metadata Reporter

You can use PowerCenter Metadata Reporter, a web-based application, to run prepackaged dashboards and reports against PowerCenter repository metadata. These reports help give you insight into your repository, which enhances your ability to analyze and manage your repository efficiently.

You can run PowerCenter Metadata Reporter from a browser on any workstation, even a workstation that does not have PowerCenter tools installed.

Repository Server Administration Console:

Use the Repository Server Administration Console to administer Repository Servers and repositories. A Repository Server can manage multiple repositories. You use the Repository Server Administration Console to create and administer the repository through the Repository Server.

You can use the Administration Console to perform the following tasks:

  • Add, edit, and remove repository configurations.
  • Export and import repository configurations.
  • Create a repository.
  • Promote a local repository to a global repository.
  • Copy a repository.
  • Delete a repository from the database.
  • Back up and restore a repository.
  • Start, stop, enable, and disable repositories.
  • Send repository notification messages.
  • Register and unregister a repository.
  • Propagate domain connection information for a repository.
  • View repository connections and locks.
  • Close repository connections.
  • Register and remove repository plug-ins.
  • Upgrade a repository.

Administration Console Windows:

The Administration Console can display the following windows:

  • Console Tree. Displays Repository Servers and managed repositories. The Administration Console displays a different set of Action menu items depending on which node you select in the Console Tree. You can also right-click a node to access the Action menu items.

The Console Tree contains the following nodes:

    • PowerCenter Repository Servers
    • Repository Server name
    • Repositories
    • Repository name
    • Connections
    • Locks
    • Activity Log
    • Backups
    • Packages
  • Main. The Main window displays details of the node you select in the Console Tree. For example, if you select a repository in the Console Tree, the Main window displays the properties of the repository, such as the status and start time.

The Main window displays results in the following views:

    • List view. Displays a collection of items that includes an icon and a label.
    • HTML view. Displays repository information as a dynamic HTML page. The Administration Console only displays repositories in HTML view.

Repository Manager:

Use the Repository Manager to administer your repositories. The Repository Manager allows you to navigate through multiple folders and repositories, and perform the following tasks:

  • Manage the repository. You can perform repository management functions, such as copying, creating, starting, and shutting down repositories. You launch the Repository Server Administration Console to perform these functions.
  • Implement repository security. You can create, edit, and delete repository users and user groups. You can assign and revoke repository privileges and folder permissions.
  • Perform folder functions. You can create, edit, copy, and delete folders. Work you perform in the Designer and Workflow Manager is stored in folders. If you want to share metadata, you can configure a folder to be shared.
  • View metadata. You can analyze sources, targets, mappings, and shortcut dependencies, search by keyword, and view the properties of repository objects.

Repository Manager Windows:

The Repository Manager can display the following windows:

  • Navigator. Displays all objects that you create in the Repository Manager, the Designer, and the Workflow Manager. It is organized first by repository, then by folder and folder version. Viewable objects include sources, targets, dimensions, cubes, mappings, mapplets, transformations, sessions, and workflows. You can also view folder versions and business components.
  • Main. Provides properties of the object selected in the Navigator window. The columns in this window change depending on the object selected in the Navigator window.
  • Dependency. Shows dependencies on sources, targets, mappings, and shortcuts for objects selected in either the Navigator or Main window.
  • Output. Provides the output of tasks executed within the Repository Manager, such as creating a repository.

Repository Objects:

You create repository objects using the Repository Manager, Designer, and Workflow Manager client tools. You can view the following objects in the Navigator window of the Repository Manager:

  • Source definitions. Definitions of database objects (tables, views, synonyms) or files that provide source data.
  • Target definitions. Definitions of database objects or files that contain the target data.
  • Multi-dimensional metadata. Target definitions that are configured as cubes and dimensions.
  • Mappings. A set of source and target definitions along with transformations containing business logic that you build into the transformation. These are the instructions that the PowerCenter Server uses to transform and move data.
  • Reusable transformations. Transformations that you can use in multiple mappings.
  • Mapplets. A set of transformations that you can use in multiple mappings.
  • Sessions and workflows. Sessions and workflows store information about how and when the PowerCenter Server moves data. A workflow is a set of instructions that describes how and when to run tasks related to extracting, transforming, and loading data. A session is a type of task that you can put in a workflow. Each session corresponds to a single mapping.

DESIGNING MAPPINGS:

The goal of the design process is to create mappings that depict the flow of data between sources and targets, including changes made to the data before it reaches the targets. However, before you can create a mapping, you must first create or import source and target definitions. You might also want to create reusable objects, such as reusable transformations or mapplets.

Perform the following design tasks in the Designer:

  1. Import source definitions. Use the Source Analyzer to connect to the sources and import the source definitions.
  2. Create or import target definitions. Use the Warehouse Designer to define relational, flat file, or XML targets to receive data from sources. You can import target definitions from a relational database or a flat file, or you can manually create a target definition.
  3. Create the target tables. If you add a target definition to the repository that does not exist in a relational database, you need to create target tables in your target database. You do this by generating and executing the necessary SQL code within the Warehouse Designer.
  4. Design mappings. Once you have source and target definitions in the repository, you can create mappings in the Mapping Designer. A mapping is a set of source and target definitions linked by transformation objects that define the rules for data transformation. A transformation is an object that performs a specific function in a mapping, such as looking up data or performing aggregation.
  5. Create mapping objects. Optionally, you can create reusable objects for use in multiple mappings. Use the Transformation Developer to create reusable transformations. Use the Mapplet Designer to create mapplets. A mapplet is a set of transformations that may contain sources and transformations.
  6. Debug mappings. Use the Mapping Designer to debug a valid mapping to gain troubleshooting information about data and error conditions.
  7. Import and export repository objects. You can import and export repository objects, such as sources, targets, transformations, mapplets, and mappings, to archive or share metadata.

Designer Windows:

You can display the following windows in the Designer:

  • Navigator. Connect to repositories, and open folders within the Navigator. You can also copy objects and create shortcuts within the Navigator.
  • Workspace. Open different tools in this window to create and edit repository objects, such as sources, targets, mapplets, transformations, and mappings.
  • Output. View details about tasks you perform, such as saving your work or validating a mapping.
  • Status bar. Displays the status of the operation you perform.
  • Overview. An optional window to simplify viewing a workspace that contains a large mapping or multiple objects. Outlines the visible area in the workspace and highlights selected objects in color.
  • Instance data. View transformation data while you run the Debugger to debug a mapping.
  • Target data. View target data while you run the Debugger to debug a mapping.

Loading Data:

In the Workflow Manager, you define a set of instructions to execute tasks, such as sessions, emails, and shell commands. This set of instructions is called a workflow.

After you create a workflow in the Workflow Designer, the next step is to add tasks to the workflow. The Workflow Manager includes tasks, such as the Session task, the Command task, and the Email task so you can design your workflow. The Session task is based on a mapping you build in the Designer.

You then connect tasks with links to specify the order of execution for the tasks you created. Use conditional links and workflow variables to create branches in the workflow.

When the workflow start time arrives, the PowerCenter Server retrieves the metadata from the repository to execute the tasks in the workflow.

You can monitor the workflow status in the Workflow Monitor.

Workflow Manager:

The Workflow Manager consists of three tools to help you develop a workflow:

  • Task Developer. Create tasks you want to accomplish in the workflow in the Task Developer.
  • Workflow Designer. Create a workflow by connecting tasks with links in the Workflow Designer. You can also create tasks in the Workflow Designer as you develop the workflow.
  • Worklet Designer. Create a worklet in the Worklet Designer. A worklet is an object that groups a set of tasks. A worklet is similar to a workflow, but without scheduling information. You can nest multiple worklets inside a workflow.

Before you create a workflow, you must configure the following connection information:

  • PowerCenter Server connection. Register the PowerCenter Server with the repository before you can start it or create a session to run against it.
  • Database connections. Create connections to source and target systems.
  • Other connections. If you want to use external loaders or FTP, you configure these connections in the Workflow Manager.

Workflow Manager Windows

The Workflow Manager displays the following windows to help you create and organize workflows:

  • Navigator. Allows you to connect to and work in multiple repositories and folders.
  • Workspace. Allows you to create, edit, and view tasks, workflows, and worklets.
  • Output. Displays messages from the PowerCenter Server and the Repository Server. The Output window also displays messages when you save or validate tasks and workflows.
  • Overview. An optional window that makes it easier to view workbooks containing large workflows. Outlines the visible area in the workspace and highlights selected objects in color. Choose View-Overview Window to display this window.

Workflow Monitor:

After you create a workflow, you run the workflow in the Workflow Manager and monitor it in the Workflow Monitor. The Workflow Monitor is a tool that displays details about workflow runs in two views, Gantt Chart view and Task view. You can monitor workflows in online and offline modes.

The Workflow Monitor consists of the following windows:

  • Navigator window. Displays monitored repositories, servers, and repository objects.
  • Output window. Displays messages from the PowerCenter Server.
  • Time window. Displays progress of workflow runs.
  • Gantt Chart view. Displays details about workflow runs in chronological format.
  • Task view. Displays details about workflow runs in a report format.

Transformations Overview

A transformation is a repository object that generates, modifies, or passes data. The Designer provides a set of transformations that perform specific functions. For example, an Aggregator transformation performs calculations on groups of data.

Transformations in a mapping represent the operations the Integration Service performs on the data. Data passes through transformation ports that you link in a mapping or mapplet.

Transformations can be active or passive. Transformations can be connected to the data flow, or they can be unconnected.

Active Transformations:

An active transformation can perform any of the following actions:

  • Change the number of rows that pass through the transformation. For example, the Filter transformation is active because it removes rows that do not meet the filter condition. All multi-group transformations are active because they might change the number of rows that pass through the transformation.
  • Change the transaction boundary. For example, the Transaction Control transformation is active because it defines a commit or roll back transaction based on an expression evaluated for each row.
  • Change the row type. For example, the Update Strategy transformation is active because it flags rows for insert, delete, update, or reject.

The Designer does not allow you to connect multiple active transformations or an active and a passive transformation to the same downstream transformation or transformation input group because the Integration Service may not be able to concatenate the rows passed by active transformations. For example, one branch in a mapping contains an Update Strategy transformation that flags a row for delete. Another branch contains an Update Strategy transformation that flags a row for insert. If you connect these transformations to a single transformation input group, the Integration Service cannot combine the delete and insert operations for the row.

The Sequence Generator transformation is an exception to the rule listed above. The Designer does allow you to connect a Sequence Generator transformation and an active transformation to the same downstream transformation or transformation input group. A Sequence Generator transformation does not receive data. It generates unique numeric values. As a result, the Integration Service does not encounter problems concatenating rows passed by a Sequence Generator transformation and an active transformation.

The following figure shows how you can connect an active transformation and a passive Sequence Generator transformation to the same downstream transformation input group:

Passive Transformations:

A passive transformation does not change the number of rows that pass through the transformation, maintains the transaction boundary, and maintains the row type.

The Designer allows you to connect multiple transformations to the same downstream transformation or transformation input group only if all transformations in the upstream branches are passive. The transformation that originates the branch can be active or passive.

The following figure shows how you can connect passive transformations to the same downstream transformation input group:

Unconnected Transformations:

Transformations can be connected to the data flow, or they can be unconnected. An unconnected transformation is not connected to other transformations in the mapping. An unconnected transformation is called within another transformation, and returns a value to that transformation.

Transformation Descriptions:

The following list gives each transformation, its type, and a brief description:

  • Aggregator (Active/Connected): Performs aggregate calculations.
  • Application Source Qualifier (Active/Connected): Represents the rows that the Integration Service reads from an application, such as an ERP source, when it runs a session.
  • Custom (Active or Passive/Connected): Calls a procedure in a shared library or DLL.
  • Data Masking (Passive/Connected): Replaces sensitive production data with realistic test data for non-production environments.
  • Expression (Passive/Connected): Calculates a value.
  • External Procedure (Passive/Connected or Unconnected): Calls a procedure in a shared library or in the COM layer of Windows.
  • Filter (Active/Connected): Filters data.
  • HTTP (Passive/Connected): Connects to an HTTP server to read or update data.
  • Input (Passive/Connected): Defines mapplet input rows. Available in the Mapplet Designer.
  • Java (Active or Passive/Connected): Executes user logic coded in Java. The byte code for the user logic is stored in the repository.
  • Joiner (Active/Connected): Joins data from different databases or flat file systems.
  • Lookup (Passive/Connected or Unconnected): Looks up values.
  • Normalizer (Active/Connected): Source qualifier for COBOL sources. Can also be used in the pipeline to normalize data from relational or flat file sources.
  • Output (Passive/Connected): Defines mapplet output rows. Available in the Mapplet Designer.
  • Rank (Active/Connected): Limits records to a top or bottom range.
  • Router (Active/Connected): Routes data into multiple transformations based on group conditions.
  • Sequence Generator (Passive/Connected): Generates primary keys.
  • Sorter (Active/Connected): Sorts data based on a sort key.
  • Source Qualifier (Active/Connected): Represents the rows that the Integration Service reads from a relational or flat file source when it runs a session.
  • SQL (Active or Passive/Connected): Executes SQL queries against a database.
  • Stored Procedure (Passive/Connected or Unconnected): Calls a stored procedure.
  • Transaction Control (Active/Connected): Defines commit and rollback transactions.
  • Union (Active/Connected): Merges data from different databases or flat file systems.
  • Unstructured Data (Active or Passive/Connected): Transforms data in unstructured and semi-structured formats.
  • Update Strategy (Active/Connected): Determines whether to insert, delete, update, or reject rows.
  • XML Generator (Active/Connected): Reads data from one or more input ports and outputs XML through a single output port.
  • XML Parser (Active/Connected): Reads XML from one input port and outputs data to one or more output ports.
  • XML Source Qualifier (Active/Connected): Represents the rows that the Integration Service reads from an XML source when it runs a session.

When you build a mapping, you add transformations and configure them to handle data according to a business purpose. Complete the following tasks to incorporate a transformation into a mapping:

1. Create the transformation. Create it in the Mapping Designer as part of a mapping, in the Mapplet Designer as part of a mapplet, or in the Transformation Developer as a reusable transformation.
2. Configure the transformation. Each type of transformation has a unique set of options that you can configure.
3. Link the transformation to other transformations and target definitions. Drag one port to another to link them in the mapping or mapplet.


Data Warehouse Question Answers


Data Warehouse:

A Data warehouse is a database concept which maintains current data and historical data for reporting and analysis.

Analysis:

Report-to-report comparison is called analysis. Analysis leads to planning.

If the plans are executed properly, the company makes good profits.

Warehouse Rules (For Master data):

Dimensional modelling is the database theory which maintains current data and historical data using the following 6 rules.

MASTER DATA: Uniquely identified data in any database is called master data. Any table which maintains repeated data is called TRANSACTION DATA. Master data is also called possibly changing data: a few properties of master data can change over time, and these changes can be captured in dimensional modelling theory.

RULE#1: Source primary key should not be primary key in warehouse.

RULE#2: Warehouse should be added with separate primary key as SID (Surrogate ID).

A surrogate key is not the actual key; it is used instead of the actual source primary key.

RULE#3: Warehouse should be added with a ‘FLAG’ column to indicate data is current or historical.

RULE#4: The dimensional table should be added with ‘VERSION’ column for versioning.

RULE#5: Target table should be added with ‘START_DATE’ column to indicate when data is loaded.

RULE#6: Target table should be added with ‘END_DATE’ column to indicate when data is modified.
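
A minimal sketch of a dimension table that follows these six rules (the table and column names are hypothetical):

    CREATE TABLE customer_dim (
      customer_sid   NUMBER PRIMARY KEY,  -- RULE#2: surrogate key (SID)
      customer_id    NUMBER,              -- RULE#1: source primary key, not a PK here
      customer_name  VARCHAR2(100),
      current_flag   CHAR(1),             -- RULE#3: current or historical indicator
      version_no     NUMBER,              -- RULE#4: versioning
      start_date     DATE,                -- RULE#5: when the row was loaded
      end_date       DATE                 -- RULE#6: when the row was modified/superseded
    );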

About ROLAP, MOLAP, DOLAP, OLTP, DWH:

1. What is ROLAP, MOLAP, and DOLAP?

ROLAP (Relational OLAP), MOLAP (Multidimensional OLAP), and DOLAP (Desktop OLAP). In these three OLAP architectures, the interface to the analytic layer is typically the same; what is quite different is how the data is physically stored.

In MOLAP, the premise is that online analytical processing is best implemented by storing the data multidimensionally; that is, data must be stored multidimensionally in order to be viewed in a multidimensional manner.

In ROLAP, architects believe in storing the data in the relational model; OLAP capabilities are provided against the relational database.

DOLAP is a variation that exists to provide portability for the OLAP user. It creates multidimensional datasets that can be transferred from server to desktop, requiring only the DOLAP software to exist on the target system. This provides significant advantages to portable computer users, such as salespeople who are frequently on the road and do not have direct access to their office server.

2. What is an MDDB, and what is the difference between MDDBs and RDBMSs?

Multidimensional database. There are two primary technologies used for storing the data in OLAP applications: multidimensional databases (MDDB) and relational databases (RDBMS). The major difference between MDDBs and RDBMSs is in how they store data. Relational databases store their data in a series of tables and columns. Multidimensional databases, on the other hand, store their data in large multidimensional arrays. For example, in an MDDB world, you might refer to a sales figure as Sales with Date, Product, and Location coordinates of 12-1-2001, Car, and South, respectively.

         Advantages of MDDB:

         Retrieval is very fast because

  • The data corresponding to any combination of dimension members can be retrieved with a single I/O.
  • Data is clustered compactly in a multidimensional array.
  • Values are calculated ahead of time.
  • The index is small and can therefore usually reside completely in memory.

         Storage is very efficient because

  • The blocks contain only data.
  • A single index locates the block corresponding to a combination of sparse dimension numbers.
3. What is a mapplet and how do you create one?

A mapplet is a reusable object that represents a set of transformations. It allows you to reuse transformation logic and can contain as many transformations as you need. Create a mapplet when you want to use a standardized set of transformation logic in several mappings. For example, if you have several fact tables that require a series of dimension keys, you can create a mapplet containing a series of Lookup transformations to find each dimension key. You can then use the mapplet in each fact table mapping, rather than recreate the same lookup logic in each mapping.

         To create a new mapplet:

  1. In the Mapplet Designer, choose Mapplets-Create Mapplet.
  2. Enter a descriptive mapplet name. The recommended naming convention for mapplets is mpltMappletName.
  3. Click OK. The Mapplet Designer creates a new mapplet.
  4. Choose Repository-Save.

4. What is the difference between OLTP and OLAP?

OLTP stands for Online Transaction Processing. This is a standard, normalized database structure. OLTP is designed for transactions, which means that inserts, updates, and deletes must be fast. Imagine a call center that takes orders. Call takers are continually taking calls and entering orders that may contain numerous items. Each order and each item must be inserted into a database. Since the performance of the database is critical, we want to maximize the speed of inserts (and updates and deletes). To maximize performance, we typically try to hold as few records in the database as possible.

OLAP stands for Online Analytical Processing. OLAP is a term that means many things to many people. Here, we will use the terms OLAP and Star Schema pretty much interchangeably. We will assume that a star schema database is an OLAP system. (This is not the same thing that Microsoft calls OLAP; they extend OLAP to mean the cube structures built using their product, OLAP Services.) Here, we will assume that any system of read-only, historical, aggregated data is an OLAP system.

An OLTP system is basically application-oriented (e.g., a purchase order is functionality of an application), whereas the DWH concern is subject-oriented (subject in the sense of customer, product, item, time).

 

OLTP

  • Application Oriented
  • Used to run business
  • Detailed data
  • Current up to date
  • Isolated Data
  • Repetitive access
  • Clerical User
  • Performance Sensitive
  • Few Records accessed at a time (tens)
  • Read/Update Access
  • No data redundancy
  • Database Size 100MB-100 GB

DWH

  • Subject Oriented
  • Used to analyze business
  • Summarized and refined
  • Snapshot data
  • Integrated Data
  • Ad-hoc access
  • Knowledge User
  • Performance relaxed
  • Large volumes accessed at a time(millions)
  • Mostly Read (Batch Update)
  • Redundancy present
  • Database Size 100 GB – few terabytes

A data warehouse (or mart) is a way of storing data for later retrieval. This retrieval is almost always used to support decision-making in the organization. That is why many data warehouses are considered to be DSS (Decision-Support Systems).

Both a data warehouse and a data mart are storage mechanisms for read-only, historical, aggregated data.

By read-only, we mean that the person looking at the data won’t be changing it. If a user looks at yesterday’s sales for a certain product, they should not have the ability to change that number.

The “historical” part may just be a few minutes old, but usually it is at least a day old. A data warehouse usually holds data that goes back a certain period in time, such as five years. In contrast, standard OLTP systems usually only hold data as long as it is “current” or active. An order table, for example, may move orders to an archive table once they have been completed, shipped, and received by the customer. When we say that data warehouses and data marts hold aggregated data, we need to stress that there are many levels of aggregation in a typical data warehouse.

               

5. If the data source is in the form of an Excel spreadsheet, how do you use it?

PowerMart and PowerCenter treat a Microsoft Excel source as a relational database, not a flat file. Like relational sources, the Designer uses ODBC to import a Microsoft Excel source. You do not need database permissions to import Microsoft Excel sources.

To import an Excel source definition, you need to complete the following tasks:

  • Install the Microsoft Excel ODBC driver on your system.
  • Create a Microsoft Excel ODBC data source for each source file in the ODBC 32-bit Administrator.
  • Prepare Microsoft Excel spreadsheets by defining ranges and formatting columns of numeric data.
  • Import the source definitions in the Designer.

Once you define ranges and format cells, you can import the ranges in the Designer. Ranges display as source definitions when you import the source.

6. Which databases are RDBMSs and which are MDDBs? Can you name them?

MDDB examples: Oracle Express Server (OES), Essbase by Hyperion Software, and PowerPlay by Cognos. RDBMS examples: Oracle, SQL Server, etc.

7. What is the difference between a view and a materialized view?

A view contains a query; whenever you execute the view, it reads from the base tables. Whereas a materialized view is loaded or replicated only once, which gives you better query performance.

Materialized views can be refreshed 1. on commit or 2. on demand (complete, never, fast, force).
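
A minimal sketch in Oracle SQL (the table and column names are hypothetical):

    CREATE MATERIALIZED VIEW mv_sales_summary
    BUILD IMMEDIATE
    REFRESH COMPLETE ON DEMAND
    AS
    SELECT prod_id, SUM(amount) AS total_amount
    FROM sales
    GROUP BY prod_id;

    -- Later, refresh on demand ('C' = complete refresh)
    EXEC DBMS_MVIEW.REFRESH('MV_SALES_SUMMARY', 'C');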

8. What is a bitmap index and why is it used for a DWH?

In a bitmap index, a bitmap for each key value replaces a list of rowids. Bitmap indexes are more efficient for data warehousing because of low cardinality and low update activity, and they are very efficient for WHERE-clause filtering.
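
A minimal sketch, assuming a hypothetical sales fact table with a low-cardinality channel_id column:

    CREATE BITMAP INDEX sales_channel_bix
    ON sales (channel_id);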

9. What is a star schema? And what is a snowflake schema?

The center of the star consists of a large fact table, and the points of the star are the dimension tables. A star schema contains denormalized dimension tables and a fact table; each primary key value in a dimension table is associated with a foreign key of the fact table. Here a fact table contains all business measures (normally numeric data) and foreign key values, and dimension tables have details about the subject area.

A snowflake schema normalizes the dimension tables to eliminate redundancy; that is, the dimension data is grouped into multiple tables instead of one large table.
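
A minimal star schema sketch (hypothetical names): one denormalized dimension whose primary key is referenced by the fact table’s foreign key.

    CREATE TABLE product_dim (
      product_sid   NUMBER PRIMARY KEY,
      product_name  VARCHAR2(100),
      category      VARCHAR2(50)   -- kept in one table (star); split out in a snowflake
    );

    CREATE TABLE sales_fact (
      product_sid  NUMBER REFERENCES product_dim (product_sid),
      sale_date    DATE,
      amount       NUMBER          -- business measure
    );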

10. Why do we need a staging area database for a DWH?

A staging area is needed to clean operational data before loading it into the data warehouse. Cleaning in the sense of merging data which comes from different sources.

11. What are the steps to create a database manually?

Create the OS service, create the init file, start the database in NOMOUNT stage, and then issue the CREATE DATABASE command.
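
A minimal sketch of the SQL*Plus side, assuming the init file and directories already exist (all names and sizes are illustrative):

    STARTUP NOMOUNT PFILE='initorcl.ora';

    CREATE DATABASE orcl
      DATAFILE 'system01.dbf' SIZE 500M
      SYSAUX DATAFILE 'sysaux01.dbf' SIZE 300M
      DEFAULT TEMPORARY TABLESPACE temp TEMPFILE 'temp01.dbf' SIZE 100M
      UNDO TABLESPACE undotbs1 DATAFILE 'undotbs01.dbf' SIZE 200M
      LOGFILE GROUP 1 ('redo01.log') SIZE 50M,
              GROUP 2 ('redo02.log') SIZE 50M;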

12. Why do we need a data warehouse?

A single, complete and consistent store of data obtained from a variety of different sources, made available to end users in a way they can understand and use in a business context.

A process of transforming data into information and making it available to users in a timely enough manner to make a difference.

A technique for assembling and managing data from various sources for the purpose of answering business questions, thus making decisions possible that were not possible before.

13. What is the difference between a data mart and a data warehouse?

A data mart is designed for a particular line of business, such as sales, marketing, or finance, whereas a data warehouse is enterprise-wide/organizational. The data flow of a data warehouse depends on the approach.

14. What is the significance of a surrogate key?

A surrogate key is used in a slowly changing dimension table to track old and new values; it is used in place of the actual source primary key.

15. What is a slowly changing dimension? What kind of SCD was used in your project?

Dimension attribute values may change constantly over time. (Say, for example, the customer dimension has customer_id, name, and address; a customer’s address may change over time.) How will you handle this situation?

There are 3 types: Type 1 overwrites the existing record; Type 2 creates an additional new record at the time of the change, with the new attribute values; Type 3 creates a new field to keep the new values in the original dimension table.

16. What is the difference between primary key and unique key constraints?

A primary key maintains uniqueness and does not allow null values, whereas a unique constraint maintains uniqueness but allows null values.

17. What are the types of index? And which type of index was used in your project?

Bitmap index, B-tree index, function-based index, reverse key index, and composite index. We used bitmap indexes in our project for better performance.

18. A table has 3 partitions, but I want to update only the 3rd partition. How will you do it?

Specify the partition name in the update statement. For example:

    UPDATE employee PARTITION (partition_name) a
    SET a.empno = 10
    WHERE a.ename = 'Ashok';

19. Write a query to find out the 5th max salary (in Oracle, DB2, SQL Server).

In Oracle, order the distinct salaries in descending order, keep the top five, and take the minimum of those:

    SELECT MIN(salary)
    FROM (SELECT DISTINCT salary
          FROM employee
          ORDER BY salary DESC)
    WHERE ROWNUM <= 5;
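
A more portable version uses the ROW_NUMBER() analytic function, which is available in Oracle, DB2, and SQL Server (the employee table is hypothetical):

    SELECT salary
    FROM (SELECT salary,
                 ROW_NUMBER() OVER (ORDER BY salary DESC) AS rn
          FROM (SELECT DISTINCT salary FROM employee) d) t
    WHERE rn = 5;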

20. When you give an update statement, how will the undo/rollback segment work? What are the steps?

Oracle keeps the old values in the undo segment and the new values in redo entries. When you issue a rollback, it restores the old values from the undo segment. When you commit, the undo entries are released and the new values are made permanent.

21. When you give an update statement, how does the memory flow happen and how does Oracle allocate memory for it?

Oracle first checks the shared SQL area to see whether the same SQL statement is already available; if it is, it is reused. Otherwise, Oracle allocates memory in the shared SQL area and then creates run-time memory in the private SQL area to build the parse tree and execution plan. Once parsing is completed, the statement is stored in the shared SQL area in the previously allocated memory.


Oracle BI Apps Troubleshooting DAC and Informatica


Oracle BI Apps – Troubleshooting – DAC and Informatica Logs – Part 1

There is a large variety of logging information available to us that can assist in troubleshooting Oracle BI Applications Data Warehouse behavior.

This post aims to cover:
a) a summary of the DAC and Informatica logs,
b) their typical locations,
c) how to increase logging detail, and
d) their contents.

Note: Logging can impact performance and use up large amounts of disk space. The recommendation is to reduce logging in Production unless you are diagnosing unexpected behavior.

i) DAC Client

For debugging purposes, enable the DAC logs:
DAC > Setup View > Output Redirect.
If this property is set to TRUE, logging information, standard output, and errors are redirected to files in the DAC\log directory. The file containing standard output starts with out_ and ends with the .log extension. The standard error messages are in the file starting with err_ and ending with the .log extension.

 

ii) DAC Server Execution Logs

DAC console functions create an appropriately named log in the <DAC_HOME>\DAC\log directory, for example import.log or createwtables.log.

The level of trace detail is set in DAC > Setup View > Server Log Level.

DAC Server trace levels (case sensitive) and their content:

  • SEVERE: Default; minimal trace detail.
  • WARNING
  • INFO: Start at this level for troubleshooting.
  • CONFIG
  • FINE
  • FINER
  • FINEST: Most extensive trace detail.

Note: The DAC logger will mark each line with the trace level that generated it. A line in the log marked SEVERE, may not necessarily indicate a severe problem was encountered.

 

iii) DAC Server Trace Logs
DAC > Setup View > SQL Trace.

A value of TRUE sends a hint to the database connectivity layer of the DAC server to enable SQL tracing; thus, every SQL statement that is run by the DAC server is spooled to the appropriate output log file.

Caution: High output volume; do not set this unless specifically required.

 




iv) ETL Logs

–> DAC

DAC log names, their contents, and default locations:

  • etl_summary.txt: Summary of the ETL Execution Plan; includes one line per task, showing status, timestamp, and error code. Location: <OBIEEroot>\DAC\logs
  • <Execution Plan Name>.x.log (e.g. Complete_ETL.5.log; the name contains a unique sequence number, so a record of historical logs is kept): Summary of the ETL execution plan; includes each task, the “pmcmd” workflow command syntax executed, and workflow messages. Location: <OBIEEroot>\DAC\logs
  • <DAC Task Name>.log: Task log showing the Informatica PMCMD trace, the Informatica workflow log name, connection information to Informatica, and the userid. Location: <OBIEEroot>\DAC\logs
  • <DAC Task Name>_DETAIL.log: Task log showing the Informatica PMCMD trace, the Informatica workflow log name, the Informatica session log name, and Informatica session summary info. Location: <OBIEEroot>\DAC\logs

 –> Informatica

Informatica ETL log names, their contents, and default locations:

  • <Informatica Workflow Name>.log: Includes the port, repository name, folder name, userid, and Informatica session name. Location: $INFA_HOME/server/infa_shared/WorkflowLogs/
  • <Informatica Session Name>.log: Server mode, server code page, session cache size, and source and target details. Location: $INFA_HOME/server/infa_shared/SessLogs/

The amount of detail in the session log depends on the tracing level that you set. You can define tracing levels for each transformation (Properties) or for the entire session. By default, the PowerCenter Server uses the tracing levels configured in the mapping; you can override them for a session in Workflow Manager > Edit Task > Override Tracing.


Oracle BI Apps – Troubleshooting – DAC and Informatica Logs – Part 2

The DAC console controls ETL. It runs an Execution Plan for the ETL, which contains Tasks.

Excerpts of information recorded in the DAC logs are displayed in the DAC console > Current Run > Tasks and Task Details Tabs. This means that you can choose to use the DAC Execution Views to drill down on unexpected behavior, or you can use the DAC logs.

A typical ETL investigation process would be: 

1) Start off with default logging levels, to avoid system resource overload
2) Review the DAC Execution > Current Run > Tasks and Tasks Details

 

3) Review the Tasks, filtered by ‘Failed Tasks’ (colored pink)

 

4) Note the Task Name, Execution Type, and Task Phase
5) Review the Task Details, filtered by ‘Failed’ Tasks.
6) Sort the Failed tasks to find the one that failed first.

7) Note the ‘Task Name’ (same as above), ‘Name’ (= Informatica workflow name), and ‘End Timestamp’. Double-click on the ‘Status Description’ column, which will give you further information in the context of the type of task that failed. For example, if an Informatica session was being executed (Task Execution Type = Informatica) and has failed, it will give the session name; on the other hand, if the DAC task that failed was “internal”, an Informatica session name would not be relevant.

Note: Alternatively you could use the DAC logs, starting with the etl_summary, to obtain similar info.

By doing the above analysis we have:

• Obtained an overview of the ETL execution
• Found out which Informatica workflow and sessions (and their logs) to use if we need to find more detailed info.


Weblogic: Managed Server: failed hostname verification check.


Weblogic: Managed Server: failed hostname verification check.

While attempting to start the Weblogic Managed Server for OAM, I received the below error:

<09-Jun-2015 13:26:46 o'clock BST> <Warning> <Security> <BEA-090504> <Certificate chain received from oraworld.com - 192.168.0.19 failed hostname verification check. Certificate contained iam.oraworld.com but check expected oraworld.com>

This error basically means that when WebLogic Server tries to validate the certificate, it compares the CN of the certificate with the hostname the request is coming from. If they don’t match, hostname verification fails and the SSL connection is not established.

To resolve this issue, you can turn off host name verification in one of the following ways:

a) From Command Line

-Dweblogic.security.SSL.ignoreHostnameVerification=true
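
One common way to pass this flag (assuming a standard domain layout) is to append it to the server start properties in setDomainEnv.sh, for example:

    # Appends the flag to the properties WebLogic passes to the JVM at startup
    EXTRA_JAVA_PROPERTIES="${EXTRA_JAVA_PROPERTIES} -Dweblogic.security.SSL.ignoreHostnameVerification=true"
    export EXTRA_JAVA_PROPERTIES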

b) From Weblogic Console

  1. If you have not already done so, in the Change Center of the Administration Console, click Lock & Edit
  2. In the left pane of the Console, expand Environment and select Servers.
  3. Click the name of the server for which you want to disable hostname verification.
  4. Select Configuration > SSL, and click Advanced at the bottom of the page.
  5. Set the Hostname Verification field to None.
    Oracle recommends leaving host name verification on in production environments.
  6. Click Save.
  7. To activate these changes, in the Change Center of the Administration Console, click Activate Changes.
  8. Restart Server for changes to take place.


OBIEE: BI Server: Important configuration files and log files location


 

I wrote this short post just as a note to myself quite some time back. Since I have had to rely on it quite a few times, I thought it would be worth sharing with our readers.

a) Location :$MW_HOME/user_projects/domains/bifoundation_domain/servers/bi_server1/logs
Files: bi_server1.log, bi_server1-diagnostic.log, bi_server1.out and access.log

b) Location :$MW_HOME/instances/instance1/diagnostics/logs/OracleBIServerComponent/coreapplication_obis1
Files: nqserver.log and nqquery.log

c) Location :$MW_HOME/instances/instance1/diagnostics/logs/OracleBIPresentationServicesComponent/coreapplication_obips1
Files: sawlog.log

d) Location :$MW_HOME/instances/instance1/config/OracleBIServerComponent/coreapplication_obis1
Files: NQSConfig.INI and DBFeatures.INI

e) Location :$MW_HOME/instances/instance1/config/OracleBIPresentationServicesComponent/coreapplication_obips1
Files: instanceconfig.xml and credentialstore.xml

f) Location : $MW_HOME/user_projects/domains/bifoundation_domain/config
Files: config.xml

g) Location :$MW_HOME/user_projects/domains/bifoundation_domain/config/fmwconfig
Files: system-jazn-data.xml

The logs can also be seen from FMW EM Console.

a) Login to the URL http://server.domain:7001/em and navigate to:
b) Farm_bifoundation_domain-> Business Intelligence-> coreapplications-> Diagnostics-> Log Messages
c) You will find the available files:

Presentation Services Log
Server Log
Scheduler Log
JavaHost Log
Cluster Controller Log
Action Services Log
Security Services Log
Administrator Services Log



Oracle Enterprise Linux: forgotten: reset root password


Recently it so happened that I forgot the root password for one of the servers, but apparently it’s quite easy to reset. Just follow the steps below:

  1. Press F2 when the splash screen comes up.
  2. A GRUB screen will display.
  3. Enter the letter ‘e’ (without quotes).
  4. Using the arrow keys, move the cursor to the line for the kernel.
  5. Enter the letter ‘e’ again.
  6. You will see a command line.
  7. After the last word/character, append a space and the word single (single-user mode).
  8. Hit Enter.
  9. Make sure the cursor is on the kernel line.
  10. Enter the letter ‘b’ (this will boot).
  11. The system will load into single-user mode and you should see a root prompt.
  12. Enter passwd.
  13. Enter the new password two times.
  14. Enter exit (this will reboot).
  15. Test the new root password. Voila!


Oracle BIA Full or Incremental Load work


It’s possible to configure either a Full- or an Incremental Load in Oracle BIA. If you look at the Informatica version of Oracle BIA, there are a few areas you will have to configure.

First you start with the Informatica Mapping. This will be one Mapping. It does not matter whether you run this Mapping Full or Incremental.

Let’s take the ‘SDE_ORA_GLJournals’-Mapping as an example. In the Source Qualifier of the Mapping (or Mapplet), you will see a reference to the $$LAST_EXTRACT_DATE. If you run the Mapping with these settings, you run an Incremental Mapping. This means that you only select the data which was created or updated since the last ETL-run.

The $$LAST_EXTRACT_DATE is a Parameter which you configure in the Datawarehouse Administration Console (DAC) and reference in Informatica.

According to the Oracle documentation, “@DAC_SOURCE_PRUNED_REFRESH_TIMESTAMP returns the minimum of the task’s primary or auxiliary source tables’ last refresh timestamp, minus the prune minutes.” This is the run-time value behind $$LAST_EXTRACT_DATE.

Make sure this Parameter is available in both the DAC (see above) and in the Mapping (or Mapplet).

This way the Parameter can be used in the Extraction Mapping. If you reference a Parameter in the Extraction Mapping Query which isn’t declared, the Workflow will return an error and won’t complete.

So the steps are easy:

1. Declare the $$LAST_EXTRACT_DATE-Parameter in the DAC
2. Declare the $$LAST_EXTRACT_DATE-Parameter in Informatica
3. Reference the $$LAST_EXTRACT_DATE-Parameter in the Source Qualifier

As I said before, the same Mapping is used for the Incremental as well as the Full Load. If you want to be able to run the two different loads, make sure there are two different Workflows which run the same Mapping. The difference lies in the parameterization of each Workflow: the Full-Workflow uses the $$INITIAL_EXTRACT_DATE whereas the Incremental-Workflow uses the $$LAST_EXTRACT_DATE.

If you edit the task which belongs to the Incremental-Workflow (‘SDE_ORA_GLJournals’), you will find the Source Qualifier with the extraction query and a reference to the $$LAST_EXTRACT_DATE-Parameter.

There you can see that the LAST_UPDATE_DATE is compared to the $$LAST_EXTRACT_DATE-Parameter.

After each ETL run, the LAST_EXTRACT_DATES (Refresh Dates) are stored per table. You can check, update or delete these values as required. If you delete the Refresh Date, a Full Load will be performed the next time.

As stated earlier, the Full-Workflow is almost identical. The only difference is that it references the $$INITIAL_EXTRACT_DATE. The $$INITIAL_EXTRACT_DATE-Parameter is defined in the DAC: you define a date in the past. Just make sure that this date captures all the data you need.

Again, make sure this Parameter is available in both the DAC (see above) and in the Mapping (or Mapplet), so it can be used in the Extraction Mapping. As noted earlier, referencing an undeclared Parameter in the Extraction Mapping Query makes the Workflow fail.

How do you make sure that the $$INITIAL_EXTRACT_DATE-Parameter will be used when running a Full-Load?

If you edit the task which belongs to the Full-Workflow (‘SDE_ORA_GLJournals_Full’), you will find the Source Qualifier with the extraction query and a reference to the $$INITIAL_EXTRACT_DATE-Parameter.

There you can see that the LAST_UPDATE_DATE is compared to the $$INITIAL_EXTRACT_DATE-Parameter.
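
In the hypothetical sketch from the incremental section above, only the referenced Parameter would change for the full load:

    -- full load: extract everything since the configured initial date
    WHERE  jeh.last_update_date >
           TO_DATE('$$INITIAL_EXTRACT_DATE', 'MM/DD/YYYY HH24:MI:SS')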

At this point everything is in place to run either a Full or an Incremental Load.

You just have to tell the DAC to run either the ‘SDE_ORA_GLJournals_Full’-Workflow (full) or the ‘SDE_ORA_GLJournals’-Workflow (incremental).

Check the Informatica Session Log when the ETL produces a different result than expected. It could be that the Workflows are incorrectly defined. The Session Log shows which Parameter was used and what its value was.


Logging into DAC for the First Time


When you log into DAC for the first time, you must configure a connection to the DAC Repository. DAC stores this connection information for subsequent logins.

DAC Repository Database Authentication File

When you configure a connection to the DAC Repository, the configuration process includes creating a new authentication file or selecting an existing authentication file. The authentication file authenticates the database in which the repository resides. If you create a new authentication file, you will specify the table owner and password for the database.

A user with the Administrator role must distribute the authentication file to any user account that needs to access the specified DAC Repository.

To log into DAC for the first time

  1. Start the DAC Client by navigating to the $ORACLE_HOME\bifoundation\dac directory and double-clicking the startclient.bat file. The Login… dialog box appears.
  2. Click Configure.
  3. In the Configuring … dialog box, select Create Connection, and then click Next.
  4. Enter the appropriate connection information:
    Name: Enter a unique name for the connection to the DAC Repository.
    Connection type: Select the type of database in which the DAC Repository will be stored.
    Connection String, or Database name, or TNS Name, or Instance: Select the database name or database account name of the DAC Repository. If you are using:
    • Oracle (OCI8), use the tnsnames entry.
    • Oracle (Thin), use the instance name.
    • SQL Server, use the database name.
    • DB2-UDB, use the connect string as defined in the DB2 configuration.
    Database Host: Enter the name of the machine where the DAC Repository will reside.
    Database Port: Enter the port number on which the database listens. For example, the default port is 1521 for an Oracle database and 1433 for a SQL Server database.
    Optional URL: Can be used to override the standard URL for this connection.
    Optional Driver: Can be used to override the standard driver for this connection.
    Authentication File: Click in this field to do one of the following:
    • Select an existing authentication file: navigate to the appropriate location, select the authentication file, and click OK.
    • Create a new authentication file: navigate to the folder where you want to save the authentication file, and click OK.

    Proceed to the next step for detailed instructions.

  5. To select an existing authentication file, do the following:
    1. Click in the Authentication File field of the Configuring… dialog box.
    2. In the Authentication File dialog box, select Choose existing authentication file.
    3. Navigate to the appropriate folder, and select the authentication file. Click OK.
    4. In the Configuring… dialog box, click Test Connection to confirm the connection works.
    5. Click Apply, and then click Finish.

      Note:

      You must distribute this authentication file to all user accounts that need to access this DAC Repository.

  6. To create a new authentication file, do the following:
    1. Click in the Authentication File field of the Configuring… dialog box.
    2. In the Authentication File dialog box, select Create authentication file.
    3. Navigate to the folder where you want to save the new authentication file, and click OK.
    4. In the Create Authentication File dialog box, enter a unique name for the authentication file, and click OK.
    5. Enter the Table Owner Name and Password for the database where the repository will reside.
    6. In the Configuring… dialog box, click Test Connection to confirm the connection works.
    7. Click Apply, and then click Finish.

      Note:

      You must distribute this authentication file to all user accounts that need to access this DAC Repository.

  7. In the Login… dialog box, do the following:
    1. Select the appropriate Connection from the drop-down list.
    2. Enter Administrator as the User Name.
    3. Enter Administrator as the Password.
    4. Click Login.
  8. If asked whether you want to create or upgrade the DAC Repository schema, click Yes.


What is the difference between Informatica & DAC scheduling?


1. DAC allows you to execute loads based on a “functional” subject area/module. The INFA scheduler does not offer this.

2. DAC has pre-built parallelism logic based on the ETL stages, tasks, etc. In the INFA scheduler you would have to set this up manually, which is a pain.

3. DAC has pre-built functionality to add/drop indexes, run stats, and so on; in the INFA scheduler you would have to set this up manually. For example, if multiple tasks load the same table, DAC will automatically add the indexes after the last load. DAC can also differentiate between ETL and Query indexes.

4. DAC has pre-built logic for FULL/INCREMENTAL loads via refresh dates. You can do this in INFA, but again, it is more work than is needed.

5. DAC allows you to configure key parameters for BI Apps loads. It dynamically uses a set of parameter files at run time to pass key values based on the OBIA configuration.

6. In general, if you plan to use OBIA, using DAC will save you a lot of time and effort. For custom INFA ETLs you can use the INFA scheduler; for the complex pre-built loads of OBIA, it makes more sense to use DAC.

7. When OBIA ships a patch, a new set of mappings, or an upgrade, Oracle provides the DAC metadata, whereas with the INFA scheduler you are on your own for any updates or variations in the ETL load processes.

8. DAC orders the many Informatica tasks and issues commands to run them one by one.

9. When there are 300+ tasks (typical for any one module of BI Apps), it is practically impossible to control the task flow and order of execution with the Informatica scheduler.

10. Also, when there are many conformed dimensions (for example, the Organizations table is loaded in HR, SCM, FIN, etc.), DAC controls the incremental and full loads for those conformed dimensions.

11. If you are using the Informatica scheduler for your own home-grown applications, you need not use DAC. If you are using custom ETL tasks along with BI Apps, then DAC comes in handy.


OBIA Financial Analytics AP Process


Today we are looking at the basic process of AP, or Accounts Payable, in Oracle EBS for Financial Analytics. This should give you a basic understanding of how the AP process works at most companies that implement Oracle EBS. In order to build AP reports out of an EBS source, you need to understand these basics.

Although different companies have different accounting processes and deal in different products, the basic AP process is comparable.

The AP process includes 4 main blocks:

Purchase Orders:

These are the purchase orders that the company has decided to execute, for example through Oracle iProcurement. A PO basically means that the company has decided to buy something; this can include items for the office, materials that the company’s products need, contracts for consultants, laptops, speakers, insurance services, or electricity and gas.

A PO usually consists of 3 hierarchy levels (see the sketch below):
Header – the highest level.
Line – under each header, there can be multiple lines.
Distribution – under each line ordered, the item can be charged to different locations or cost centers. Manager A from one department uses it for a few weeks and then Manager B from another department uses it; from an accounting perspective, it may belong to different cost centers.
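
In EBS these three levels correspond to the PO_HEADERS_ALL, PO_LINES_ALL and PO_DISTRIBUTIONS_ALL tables. A minimal sketch of how they join (columns trimmed down for illustration):

    SELECT poh.segment1            AS po_number,  -- PO number on the header
           pol.line_num,                          -- line within the PO
           pod.code_combination_id                -- account/cost center per distribution
    FROM   po_headers_all       poh
    JOIN   po_lines_all         pol ON pol.po_header_id = poh.po_header_id
    JOIN   po_distributions_all pod ON pod.po_line_id   = pol.po_line_id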

Invoices:
This is when the invoices for these POs are entered. They mirror the hierarchical structure of the Purchase Order: each of the 3 AP Invoice tables can join to the corresponding PO table.
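
As a minimal sketch, assuming the R12-style AP tables, an invoice distribution can be tied back to its PO distribution like this (columns simplified):

    SELECT ai.invoice_num,
           aid.amount,
           pod.po_distribution_id
    FROM   ap_invoices_all              ai
    JOIN   ap_invoice_distributions_all aid ON aid.invoice_id = ai.invoice_id
    -- the distribution-level foreign key back to the Purchase Order
    JOIN   po_distributions_all         pod ON pod.po_distribution_id = aid.po_distribution_id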

Receives
Now that the company has been invoiced by its vendors, it will receive the goods and services it purchased. The act of receiving is recorded in the RCV_TRANSACTIONS table as transactions. Note that receipts are at the invoice line level. If your company places a lot of orders with one vendor and the goods are delivered continuously over a period of time, you may want to check the Receipt_Date and take the Max of this date for each transaction, or it could influence the grain of each invoice. As an example, most of the time you are interested in how much you spent in total on your trip to the local grocery store, not in each item you purchased; or, at the end of a period, in how many chairs your company received rather than how many chairs were received on each day.
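
A hypothetical sketch of collapsing multiple receipt transactions to the latest one per PO line, so the grain is not multiplied (which date best represents the receipt date depends on your setup; TRANSACTION_DATE stands in for it here):

    SELECT rt.po_line_id,
           MAX(rt.transaction_date) AS last_receipt_date  -- keep only the latest receipt
    FROM   rcv_transactions rt
    WHERE  rt.transaction_type = 'RECEIVE'
    GROUP  BY rt.po_line_id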

Hold
Hold indicates the status of your company’s decision on handling the invoices. Let’s say that after receiving the invoices, the department realizes it doesn’t have all of the goods it thought it ordered. This usually comes down to 2 main reasons:

1. The company made a mistake in dealing with the transactions. Maybe the goods were delivered to the wrong department or were miscounted. The error lies within the company, so it is still responsible for making the payment on time according to the invoice.

2. The company thinks that the vendor has not completely delivered its goods or services according to the PO. The order says 100 laptops but the company only received 95, so the company puts the invoice on hold while waiting for delivery of the remaining 5.

This information is stored in the AP_INVOICE_HOLD_DETAIL table, where HOLD_TYPE indicates what type of hold the invoice is on. For each INVOICE_ID there can be multiple HOLD_LOOKUP_CODE values telling you the specific reasons for the hold.

Now, if the hold is deemed to be the vendor not fulfilling its responsibility, the HOLD_RELEASE_DATE column drives the invoice payment due date; that is, the hold is only released once the error has been corrected.
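
A sketch of pulling the outstanding hold reasons per invoice, using the table and column names as given in this post (verify them against your EBS release):

    SELECT h.invoice_id,
           h.hold_lookup_code,   -- specific reason for the hold
           h.hold_release_date   -- NULL while the hold is still active
    FROM   ap_invoice_hold_detail h
    WHERE  h.hold_release_date IS NULL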

The payment_date and due_date are stored in various tables, but generally this information lives in AP_PAYMENTS_ALL.

So for accounting purposes, the payment date and due date are based on HOLD conditions.

The process of determining what the HOLD condition is, or whether there should be a hold at all, is called a 3-way match: you compare the PO to the invoice to the receipts to see whether they match.
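
A deliberately simplified sketch of a 3-way match at PO-line grain, reusing the illustrative tables from the sections above (real matching logic is considerably more involved):

    SELECT pol.po_line_id,
           pol.quantity             AS ordered_qty,
           NVL(rcv.received_qty, 0) AS received_qty,
           NVL(inv.invoiced_qty, 0) AS invoiced_qty
    FROM   po_lines_all pol
    LEFT JOIN (SELECT rt.po_line_id, SUM(rt.quantity) AS received_qty
               FROM   rcv_transactions rt
               WHERE  rt.transaction_type = 'RECEIVE'
               GROUP  BY rt.po_line_id) rcv
           ON rcv.po_line_id = pol.po_line_id
    LEFT JOIN (SELECT pod.po_line_id, SUM(aid.quantity_invoiced) AS invoiced_qty
               FROM   ap_invoice_distributions_all aid
               JOIN   po_distributions_all pod
                      ON pod.po_distribution_id = aid.po_distribution_id
               GROUP  BY pod.po_line_id) inv
           ON inv.po_line_id = pol.po_line_id
    -- flag lines where ordered, received and invoiced quantities disagree
    WHERE  NVL(rcv.received_qty, 0) <> pol.quantity
       OR  NVL(inv.invoiced_qty, 0) <> pol.quantity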

If there are reporting requirements that call for an OBIEE report doing these kinds of comparisons, you should now know the logic and the type of report that will make sense.

