Monday, 22 April 2013

InfoSource

An InfoSource combines information that logically belongs together into a single unit. It prepares consolidated data for updating to the data targets. InfoSources contain either transaction data or master data (attributes, texts, and hierarchies).
When you want to load related data from different DataSources into the same data targets, the InfoSource comes into the picture.
Role of InfoSource in SAP BW- 
There is a difference in how an InfoSource is used in BW versus BI.


In BW, a DataSource is assigned to an InfoSource. If fields that logically belong together exist in various source systems, they can be grouped together into a single InfoSource in BW; in other words, multiple DataSources can be assigned to one InfoSource.
In the BW transfer rules, the individual DataSource fields are assigned to the corresponding InfoObjects of the InfoSource. Here you can also determine how the data for a DataSource is actually transferred to the InfoSource. The uploaded data is transformed using the transfer rules; an extensive library of transformation functions containing business logic can be used here to perform data cleansing and to make the data analyzable.
The transfer structure is used to transfer data to the BW system. The data is transferred 1:1 from the transfer structure of the source system into the BW transfer structure.
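To make this concrete, here is a minimal Python sketch (not SAP code; the field and InfoObject names such as KUNNR and 0CUSTOMER are invented for illustration) of what a transfer rule conceptually does: map DataSource fields to the InfoObjects of the InfoSource and apply cleansing logic along the way.

# Conceptual sketch of a transfer rule: DataSource fields are mapped to
# InfoObjects and simple cleansing is applied per record. All names here
# are illustrative, not a real extract structure.

# Mapping: DataSource field -> InfoObject of the InfoSource
FIELD_TO_INFOOBJECT = {
    "KUNNR": "0CUSTOMER",
    "LAND1": "0COUNTRY",
    "NETWR": "0NET_VALUE",
}

def transfer_rule(record: dict) -> dict:
    """Apply the field-to-InfoObject mapping plus simple cleansing."""
    out = {}
    for field, infoobject in FIELD_TO_INFOOBJECT.items():
        value = record.get(field)
        if isinstance(value, str):
            value = value.strip().upper()  # cleansing: normalize strings
        out[infoobject] = value
    return out

# One record as it arrives 1:1 through the transfer structure
raw = {"KUNNR": " c0815 ", "LAND1": "de", "NETWR": 199.90}
print(transfer_rule(raw))  # {'0CUSTOMER': 'C0815', '0COUNTRY': 'DE', '0NET_VALUE': 199.9}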

If you are dealing with an InfoSource with flexible updating, the data is updated from the communication structure into the InfoCube or other data targets with the aid of the update rules. InfoSources with direct updating permit master data to be written directly (without update rules) into the master data tables.
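The difference between the two update modes can be sketched in the same style (plain Python with invented structures; in a real system this is configured in the Administrator Workbench, not coded by hand):

# Flexible update: records pass from the communication structure through
# update rules before reaching a data target such as an InfoCube.
# Direct update: master data is written straight to the master data table.

def update_rule(record: dict) -> dict:
    """Illustrative update rule: derive a revenue class for the InfoCube."""
    record = dict(record)
    record["REVENUE_CLASS"] = "A" if record["0NET_VALUE"] > 100 else "B"
    return record

communication_structure = [{"0CUSTOMER": "C0815", "0NET_VALUE": 199.9}]

# Flexible update into an InfoCube (via update rules)
infocube = [update_rule(r) for r in communication_structure]

# Direct update of master data (no update rules involved)
master_data_table = {"C0815": {"0COUNTRY": "DE"}}

print(infocube)           # [{'0CUSTOMER': 'C0815', '0NET_VALUE': 199.9, 'REVENUE_CLASS': 'A'}]
print(master_data_table)  # {'C0815': {'0COUNTRY': 'DE'}}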

Role of InfoSource in SAP BI- 
In contrast to 3.x InfoSources, as of Release SAP NetWeaver BI 7.0, an InfoSource behaves like an InfoSource with flexible update.
The data in an InfoSource is updated to an InfoProvider using a transformation.
You can define the InfoObjects of the InfoSource as keys. These keys are used to aggregate the data records during the transformation.
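A rough Python illustration of this key-based aggregation during a transformation (the InfoObject names are invented):

from collections import defaultdict

# InfoObjects declared as keys; all other (numeric) fields are aggregated
KEYS = ("0CUSTOMER", "0COUNTRY")

def aggregate(records):
    """Sum the non-key fields of records that share the same key values."""
    totals = defaultdict(lambda: defaultdict(float))
    for rec in records:
        key = tuple(rec[k] for k in KEYS)
        for field, value in rec.items():
            if field not in KEYS:
                totals[key][field] += value
    return [dict(zip(KEYS, key)) | dict(sums) for key, sums in totals.items()]

records = [
    {"0CUSTOMER": "C1", "0COUNTRY": "DE", "0NET_VALUE": 10.0},
    {"0CUSTOMER": "C1", "0COUNTRY": "DE", "0NET_VALUE": 5.0},
    {"0CUSTOMER": "C2", "0COUNTRY": "FR", "0NET_VALUE": 7.0},
]
print(aggregate(records))
# [{'0CUSTOMER': 'C1', '0COUNTRY': 'DE', '0NET_VALUE': 15.0},
#  {'0CUSTOMER': 'C2', '0COUNTRY': 'FR', '0NET_VALUE': 7.0}]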
1. Data Flow Without an InfoSource:
The DataSource is connected directly to the target by means of a transformation.
Since there is only one transformation, performance is better.
However, if you want to connect multiple DataSources with the same structure to the target, this can result in additional maintenance effort for the transformation, since you need to create a similar transformation for each DataSource.
You can avoid this effort if the DataSource is the same and merely appears in different source systems. In that case, you can use source system mapping when you transport to the target system, so that only one transformation has to be maintained in the test system; the same transformation is then created automatically for each source system in the production system.
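The maintenance point can be seen in a small Python sketch (all names invented): without an InfoSource, every DataSource carries its own copy of the same transformation logic.

# Without an InfoSource: one transformation per DataSource, even though
# the business logic is identical. Adding a third DataSource means
# creating and maintaining yet another copy.

def business_rules(record: dict) -> dict:
    record = dict(record)
    record["0COUNTRY"] = record["0COUNTRY"].upper()
    return record

transformation_for_ds1 = business_rules  # transformation DataSource 1 -> target
transformation_for_ds2 = business_rules  # a second, structurally identical copy

target = [
    transformation_for_ds1({"0COUNTRY": "de"}),
    transformation_for_ds2({"0COUNTRY": "fr"}),
]
print(target)  # [{'0COUNTRY': 'DE'}, {'0COUNTRY': 'FR'}]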

2. Data Flow with One InfoSource

The DataSource is connected to the target by means of an InfoSource. There is one transformation between the DataSource and the InfoSource and one transformation between the InfoSource and the target.
We recommend that you use an InfoSource if you want to connect a number of different DataSources to a target and the different DataSources are subject to the same business rules. In the transformation between the DataSource and the InfoSource, you align the format of the data in the DataSource with the format of the data in the InfoSource. The required business rules are applied in the subsequent transformation between the InfoSource and the target, so you can make any changes to these rules centrally in this one transformation, as required.
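The same scenario with one InfoSource, again as an invented Python sketch: the per-DataSource transformations only align formats, while the business rules live in a single central transformation.

# With an InfoSource: per-source transformations align the format, and the
# business rules sit in ONE transformation (InfoSource -> target) that can
# be changed in a single place. All names are invented for illustration.

def align_ds1(rec: dict) -> dict:    # transformation DataSource 1 -> InfoSource
    return {"customer": rec["KUNNR"], "net_value": rec["NETWR"]}

def align_ds2(rec: dict) -> dict:    # transformation DataSource 2 -> InfoSource
    return {"customer": rec["CUST_ID"], "net_value": rec["AMOUNT"]}

def central_business_rules(rec: dict) -> dict:  # InfoSource -> target
    rec = dict(rec)
    rec["revenue_class"] = "A" if rec["net_value"] > 100 else "B"
    return rec

infosource = [align_ds1({"KUNNR": "C1", "NETWR": 250.0}),
              align_ds2({"CUST_ID": "C2", "AMOUNT": 40.0})]
target = [central_business_rules(r) for r in infosource]
print(target)
# [{'customer': 'C1', 'net_value': 250.0, 'revenue_class': 'A'},
#  {'customer': 'C2', 'net_value': 40.0, 'revenue_class': 'B'}]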
3. Data Flow with Two InfoSources
We recommend that you use this type of data flow if your data flow contains not only a number of different sources but also multiple identical (or almost identical) targets. The required business rules are executed in the central transformation between the two InfoSources, so you only have to modify this one transformation in order to change the business rules. You can connect sources and targets independently of this central transformation.
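Extending the sketch to two InfoSources (again purely illustrative Python): the business rules run once between an inbound and an outbound InfoSource, and the result fans out to several near-identical targets.

# With two InfoSources: sources feed an inbound InfoSource, the business
# rules run once in the central transformation, and the outbound
# InfoSource distributes the result to multiple (almost) identical targets.

def central_business_rules(rec: dict) -> dict:
    rec = dict(rec)
    rec["revenue_class"] = "A" if rec["net_value"] > 100 else "B"
    return rec

inbound_infosource = [{"customer": "C1", "net_value": 250.0},
                      {"customer": "C2", "net_value": 40.0}]

outbound_infosource = [central_business_rules(r) for r in inbound_infosource]

# Every target receives the same centrally transformed data
target_a = list(outbound_infosource)
target_b = list(outbound_infosource)
print(target_a == target_b)  # True: one rule change updates all targets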

Data Extraction



Data extraction is the process of loading data from OLTP servers to OLAP (BW/BI) servers.
An OLTP server stores daily transactional data. For a bank, all transactions related to debits and credits on any account, loans, employee-related data, and so on count as transactional data. These systems support the real-time business. For analysis purposes, however, we do not need the data at the level of each individual transaction, so we load data from the transactional servers to the BW/BI servers. We extract only the data and fields that are necessary for analyzing the business and that help in making business-related decisions.
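As a toy Python example of what extraction means in practice (the record layout and field names are invented), we keep only the analysis-relevant fields of the OLTP transactions and aggregate them to the level that reporting needs:

from collections import defaultdict

# Detailed OLTP transaction records
oltp_transactions = [
    {"txn_id": 1, "account": "4711", "type": "debit",  "amount": 120.0,
     "teller": "T01", "timestamp": "2013-04-22T09:15:00"},
    {"txn_id": 2, "account": "4711", "type": "credit", "amount": 80.0,
     "teller": "T02", "timestamp": "2013-04-22T11:40:00"},
]

# Keep only the fields needed for analysis (drop txn_id, teller, timestamp)
# and aggregate per account and transaction type for the BW/BI side.
extract = defaultdict(float)
for txn in oltp_transactions:
    extract[(txn["account"], txn["type"])] += txn["amount"]

print(dict(extract))  # {('4711', 'debit'): 120.0, ('4711', 'credit'): 80.0}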

You may load transactional data to the BW/BI servers on a daily, weekly, or monthly basis. The frequency of loading or refreshing the data depends on the criticality of the decisions taken based on that data. If a decision is very critical and can impact short-term business decisions, the data needs to be refreshed on a daily basis.

Data Extraction Process
The basis of extraction is the OLTP extraction tables, so the metadata of the R/3 OLTP source system must be replicated to BW. To replicate the metadata from a source system into BW for one application component, choose Source System Tree -> Required Source System -> DataSource Overview -> Required Application Component -> Context Menu (right mouse click) -> Replicate DataSources in the BW Administrator Workbench.

To update all the metadata of a source system, choose Source System Tree -> Required Source System -> Context Menu (right mouse click) -> Replicate DataSources in the BW Administrator Workbench.


  • Replicate the DataSource to the BW server.
  • Go to transaction code RSA3 and check whether data is available for your DataSource.
  • If yes, go to transaction code LBWG (Delete Setup Data) and delete the setup data by entering the application name.
  • Go to transaction SBIW -> Settings for Application-Specific DataSources -> Logistics -> Managing Extract Structures -> Initialization -> Filling in the Setup Table -> Application-Specific Setup of Statistical Data -> Perform Setup (for the relevant application).