Channel: SCN : Blog List - SAP Planning and Consolidation, version for SAP NetWeaver

BPC on NW BW with HANA - sizing aspects


Recently, as part of a new SAP implementation program, we got a requirement to implement BPC 10 on NW BW on HANA.

As many of you know, BPC is an add-on to SAP NW BW, so you can set up a separate landscape for BPC and keep your core BI/BW reporting in another landscape.

 

Considering the overheads, TCO and data duplication, we decided to run BPC on the same landscape as core BW reporting. This means there is only one DB instance and one schema, and both BPC and core BW share the same resources.

 

The SAP Quick Sizer offers sizing for BW powered by HANA at the database level. Since BPC is part of the environment, the cube requirements for BPC have to be considered along with those for core BW reporting while using the tool. The main challenge for the initial sizing was that the customer had several small legacy databases for different business operations, along with Excel/Microsoft-based tools used to run some of the core processes.

 

Since there is no existing BW system, we cannot use the standard SAP report to come up with a sizing for BW on HANA. A possible option is to do the sizing with the Quick Sizer for SAP BW powered by HANA.

 

If the details needed for quick sizing are not available, the sizing can also be derived from the database footprint of the existing legacy databases. In that case, the compression ratio has to be considered as well while deriving the HANA sizing from the legacy database footprints.

 

 

Use the formula below to come up with the HANA memory sizing:

 

RAM = (X * 2 / 4 + Y / 1.5) * c + 50 GB

 

  • X = columnstore table footprint (HANA requires double the memory of the column store data; apply a compression factor of 1/4)
  • Y = rowstore table footprint (compression factor 2/3)
  • c = compression factor of the source database
  • 50 GB is the memory required for HANA caches and HANA components

 

 

In case the rowstore size is not available, use the formula below, assuming a standard rowstore size of 60 GB:

RAM = (S - 60 GB) * 2 / 4 * c + 90 GB

  • S is the source database size; with a 60 GB rowstore, (S - 60 GB) is your columnstore footprint
  • This has to be doubled (for HANA runtime/dynamic memory) with a compression factor of 1/4 applied, hence the factor 2/4
  • c = compression factor of the source database
  • The 60 GB rowstore, with an assumed compression factor of 2/3, gives a total rowstore requirement of 40 GB. This 40 GB, added to the 50 GB for HANA caches/components, makes 90 GB
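As a quick sanity check, the two formulas can be expressed as a short script (a sketch only; the factors are exactly the ones listed above):

```python
# Sketch of the two HANA RAM sizing formulas described above.
# Factors: columnstore needs double the memory with 1/4 compression (2/4),
# rowstore compresses by 2/3 (divide by 1.5), 50 GB for caches/components.

def hana_ram_gb(x_col_gb, y_row_gb, c):
    """RAM = (X * 2/4 + Y / 1.5) * c + 50 GB."""
    return (x_col_gb * 2 / 4 + y_row_gb / 1.5) * c + 50

def hana_ram_gb_unknown_rowstore(s_total_gb, c):
    """RAM = (S - 60 GB) * 2/4 * c + 90 GB, assuming a standard 60 GB rowstore."""
    return (s_total_gb - 60) * 2 / 4 * c + 90
```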

SAP provides scripts for various databases to capture the footprint; please refer to OSS Note 1637145 and its attachments.

 

Sample Calculation:

 

Inputs:

 

Table footprint of source database: 50 GB row store + 672 GB column store

The source DB is compressed by a factor of 1.8

 

Output:

 

RAM = (672 GB * 2 / 4 + 50 GB / 1.5) * 1.8 + 50 GB = 715 GB

Disk (persistence) = RAM * 4 = 715 GB * 4 = 2,860 GB

Disk (log) = 715 GB
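The sample figures above can be reproduced directly (sizes in GB; the disk figures follow the RAM * 4 and RAM * 1 rules shown):

```python
# Reproducing the sample calculation above (all sizes in GB).
ram = (672 * 2 / 4 + 50 / 1.5) * 1.8 + 50  # 714.8, i.e. roughly 715 GB
disk_persistence = 715 * 4                  # persistence = RAM * 4 = 2,860 GB
disk_log = 715                              # log volume sized at 1 x RAM
print(round(ram), disk_persistence, disk_log)
```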

 

Use the BPC sizing guide, which will help you size your application servers.

 

Though with the latest HANA-optimized BPC (via the component HANABPC) most of the processing happens in the HANA DB layer, it is still important to size your application servers. Please refer to the sizing guide for SAP BPC 10, version for NW, available in the Service Marketplace.

 

Assuming the EPM add-in and the web frontend are the UIs, the table below provides the initial sizing recommendations for 50, 150 and 300 concurrent users.

Use these recommendations to come up with initial sizing for the number of users in your organization.

BPC.jpg

Implementation categories are defined in the sizing guide as below

 

category.jpg

 

Those who are planning a new BPC 10 NW implementation on HANA should use the components below:

 

HANA    - SP06 /latest revision

SAP BW: NW 7.4 (SP04 is currently available; SP05 will be released in Q4, December 2013, and the recommendation is to go with SP05 as it will have BW-specific enhancements), or you can go with NW 7.31 SP09

CPMBPC 801 SP04

 

In addition to the regular add-on CPMBPC, the add-on HANABPC is also required if your DB is SAP HANA.

 

HANABPC 801 SP02 - SAP HANA component for Planning and Consolidation, to take advantage of HANA-optimized BPC

 

 

 

 

While considering the HANA appliance configuration/sizing, please consider the following notes:

 

1855041 - Sizing Recommendation for Master Node in BW-on-HANA

1702409 - HANA DB: Optimal number of scale out nodes for BW on HANA

1637145 - SAP BW on HANA: Sizing SAP In-Memory Database


CSV file validations in SAP BPC 7.5 NW


Applies to

SAP Business Planning and Consolidation 7.5 version for NetWeaver

Summary

The business has requested to set up and maintain a significant number of validation business rules in BPC NW. Entering the validation business rules via the BPC Admin Client can be time consuming, and there is a risk of typos when doing this manually.

This document is primarily designed for BPC NW Administrators in order to simplify their work.

Part 1 of this document explains how to create the underlying program used during the upload. How to perform mass upload of validation business rules into BPC 7.5 for NetWeaver - Part 2 shows how to create CSV files with validation rules and upload them.

Background Information

When a considerable number of validation business rules has to be entered in the system, it can become a nightmare for a BPC NW Administrator to enter them manually. Entering validation business rules via the BPC Admin Client can take a long time, and a risk of typos exists.

It can be easier to maintain validation business rules in a separate MS Excel workbook and then upload them into the system. This document demonstrates a procedure for uploading validation business rules from local CSV files.

The concept of the approach is the following:

1.    Prepare validation business rules in MS Excel using the provided templates.

2.    Save MS Excel files as CSV.

3.    Upload the files using the provided program via NetWeaver.

4.    Run validation of the uploaded business rules via BPC Admin Client.

Notice that the program performs minimum validation of the entered data. The full validation is performed at step 4 via standard functionality of BPC Admin Client.
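As a rough illustration of steps 1 and 2 (Part 2 of the document covers this in detail), the two files could be generated as follows. The column order is inferred from the SPLIT statements in the program below, the first line of each file is a header line that the program skips, and all file names and rule values here are purely illustrative:

```python
# Sketch: generate the header and detail CSV files in the layout the upload
# program expects. SEQ comes first; MANDT, APPSET_ID and APPLICATION_ID are
# filled in by the program itself, not read from the files.
import csv

header_rows = [
    # SEQ, VALIDATION_ID, VAL_CHECK, R_SELECTION, R_DESTINATION, PERIOD, MAX_AMOUNT, COMMENT
    ["1", "VAL_IC_AR_AP", "Y", "", "", "", "0", "IC receivables vs payables"],
]
detail_rows = [
    # SEQ, VALIDATION_ID, SIGN_L, ACCOUNT_L, SUBTABLES_L, SIGN_R, ACCOUNT_R, SUBTABLES_R, COMMENT
    ["1", "VAL_IC_AR_AP", "1", "ICAR", "F_CLO", "-1", "ICAP", "F_CLO", "left vs right side"],
]

with open("validation_header.csv", "w", newline="") as f:
    w = csv.writer(f)
    w.writerow(["SEQ", "VALIDATION_ID", "VAL_CHECK", "R_SELECTION",
                "R_DESTINATION", "PERIOD", "MAX_AMOUNT", "COMMENT"])
    w.writerows(header_rows)

with open("validation_detail.csv", "w", newline="") as f:
    w = csv.writer(f)
    w.writerow(["SEQ", "VALIDATION_ID", "SIGN_L", "ACCOUNT_L", "SUBTABLES_L",
                "SIGN_R", "ACCOUNT_R", "SUBTABLES_R", "COMMENT"])
    w.writerows(detail_rows)
```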

The program was built and tested on the following configuration; it was not tested with other Service Packs.

·        BPC 7.5 for NetWeaver SP11

·        SAP NetWeaver 7.0.1 SP05

Prerequisites

Required/recommended expertise or prior knowledge

·        SAP BusinessObjects Planning and Consolidation 7.5, version for SAP NetWeaver

·        ABAP programming skills

·        Access to SAP NetWeaver transaction codes: SE16, SE38, SE80.

·        Developer key for NetWeaver user.

Step-by-Step Procedure

Create ZUJ_VALIDATION_CSV_UPLOAD

The following steps describe how to implement the desired functionality.

1.    Log on to NetWeaver.

2.    Enter ABAP Editor (transaction SE38)

3.    In the Program field enter the name of the program, for example ZUJ_VALIDATION_CSV_UPLOAD.

4.    Choose (Create). You reach the ABAP: Program Properties <Name of Program> Change screen.

5.    Enter the title of the program.

6.    Under Type in the Attributes field, choose Executable Program and then Save.

7.    You reach the Create Object Catalog Entry dialog box.

8.    In the Attributes field under Package enter $TMP and save the program as a Local Object.

9.    The following screen appears with REPORT ZUJ_VALIDATION_CSV_UPLOAD. 

10.  Select all content from line 1 to line 10 and replace it with the code provided at the bottom of this document. The result should be the following.

11.  Check the code by clicking on Check.

12.  Activate the program.

13.  The program is ready to be executed, however it is good to maintain labels for selection screens. Go to Text Symbols from the menu as shown below. 

14.  On Selection Texts tab enter the texts as shown below

P_APPL Application ID
P_APPSET Appset ID
XLSFILED Validation rule detail file
XLSFILEH Validation rule header file

15.  Activate and return to the previous screen.


The program is fully ready. If you click Execute you should see the following screen.

*&---------------------------------------------------------------------*
*& Report ZUJ_VALIDATION_CSV_UPLOAD
*&---------------------------------------------------------------------*
REPORT zuj_validation_csv_upload.

CONSTANTS: l_tab_name_h TYPE tabname VALUE 'UJP_VALIDATIONH',
           l_tab_name   TYPE tabname VALUE 'UJP_VALIDATION'.

TYPES: BEGIN OF validation_rule_header,
         mandt          TYPE mandt,
         appset_id      TYPE uj_appset_id,
         application_id TYPE uj_appl_id,
         seq            TYPE uj_smallint,
         validation_id  TYPE uj_validation_id,
         val_check      TYPE uj_validation_check_type,
         r_selection    TYPE uj_selection,
         r_destination  TYPE uj_selection,
         period         TYPE uj_id,
         max_amount     TYPE uj_smallint,
         commnt         TYPE uj_desc,
       END OF validation_rule_header.

TYPES: BEGIN OF validation_rule_detail,
         mandt          TYPE mandt,
         appset_id      TYPE uj_appset_id,
         application_id TYPE uj_appl_id,
         seq            TYPE uj_smallint,
         validation_id  TYPE uj_validation_id,
         sign_l         TYPE uj_smallint,
         account_l      TYPE uj_account_id,
         subtables_l    TYPE uj_flow_id,
         sign_r         TYPE uj_smallint,
         account_r      TYPE uj_account_id,
         subtables_r    TYPE uj_flow_id,
         commnt         TYPE uj_desc,
       END OF validation_rule_detail.

DATA: l_rc        TYPE i,
      user_act    TYPE i,
      vh_filename TYPE filetable,
      vd_filename TYPE filetable,
      v_header    TYPE STANDARD TABLE OF ujp_validationh,
      v_detail    TYPE STANDARD TABLE OF ujp_validation,
      line_vh     LIKE LINE OF v_header,
      line_vd     LIKE LINE OF v_detail,
      lr_data     TYPE REF TO data,
      appset_id   TYPE uja_appl-appset_id,
      appl_id     TYPE uja_appl-application_id,
      lo_biz_rule TYPE REF TO if_uja_biz_rule.

DATA: itab   TYPE TABLE OF string,
      idat_h TYPE TABLE OF validation_rule_header WITH HEADER LINE,
      idat_d TYPE TABLE OF validation_rule_detail WITH HEADER LINE.

DATA: l_str             TYPE string,
      seq_as_char(5)    TYPE c,
      sign_l_as_char(2) TYPE c,
      sign_r_as_char(2) TYPE c,
      max_char(5)       TYPE c,
      lt_message        TYPE uj0_t_message,
      ls_message        TYPE uj0_s_message,
      n_lines_h         TYPE i,
      n_lines_d         TYPE i.

FIELD-SYMBOLS <fs> TYPE any.
FIELD-SYMBOLS: <g_fs> TYPE any. "Global field symbol which will hold a line of a rule

PARAMETERS: xlsfileh TYPE string OBLIGATORY,
            xlsfiled TYPE string OBLIGATORY,
            p_appset TYPE uja_appset_info-appset_id OBLIGATORY,
            p_appl   TYPE uja_appl-application_id OBLIGATORY.

AT SELECTION-SCREEN ON VALUE-REQUEST FOR xlsfileh.
  CALL METHOD cl_gui_frontend_services=>file_open_dialog
    EXPORTING
      window_title            = 'Select the validation rule header file'
      default_extension       = 'CSV'
      file_filter             = '*.CSV'
    CHANGING
      file_table              = vh_filename
      rc                      = l_rc
      user_action             = user_act
    EXCEPTIONS
      file_open_dialog_failed = 1
      cntl_error              = 2
      error_no_gui            = 3
      not_supported_by_gui    = 4
      OTHERS                  = 5.
  IF sy-subrc <> 0.
    MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
            WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
  ENDIF.
  IF user_act = '0'.
    READ TABLE vh_filename INDEX 1 INTO xlsfileh.
  ENDIF.

AT SELECTION-SCREEN ON VALUE-REQUEST FOR xlsfiled.
  CALL METHOD cl_gui_frontend_services=>file_open_dialog
    EXPORTING
      window_title            = 'Select the validation rule detail file'
      default_extension       = 'CSV'
      file_filter             = '*.CSV'
    CHANGING
      file_table              = vd_filename
      rc                      = l_rc
      user_action             = user_act
    EXCEPTIONS
      file_open_dialog_failed = 1
      cntl_error              = 2
      error_no_gui            = 3
      not_supported_by_gui    = 4
      OTHERS                  = 5.
  IF sy-subrc <> 0.
    MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
            WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
  ENDIF.
  IF user_act = '0'.
    READ TABLE vd_filename INDEX 1 INTO xlsfiled.
  ENDIF.

START-OF-SELECTION.

* Validate the appset and application IDs
  CALL METHOD cl_uja_appset=>get_appset_appl_caption
    EXPORTING
      i_appset_id      = p_appset
      i_application_id = p_appl
    IMPORTING
      e_appset_id      = appset_id
      e_application_id = appl_id
    CHANGING
      ct_message       = lt_message.
  READ TABLE lt_message TRANSPORTING NO FIELDS WITH KEY msgty = 'E'.
  "If sy-tabix <> 0 then either the appset, the application or the combination of the IDs is wrong
  IF sy-tabix <> 0.
    LOOP AT lt_message INTO ls_message.
      WRITE ls_message-message.
    ENDLOOP.
    EXIT.
  ENDIF.

* Upload the validation rule header file
  CALL METHOD cl_gui_frontend_services=>gui_upload
    EXPORTING
      filename                = xlsfileh
      read_by_line            = 'X'
    CHANGING
      data_tab                = itab
    EXCEPTIONS
      file_open_error         = 1
      file_read_error         = 2
      no_batch                = 3
      gui_refuse_filetransfer = 4
      invalid_type            = 5
      no_authority            = 6
      unknown_error           = 7
      bad_data_format         = 8
      header_not_allowed      = 9
      separator_not_allowed   = 10
      header_too_long         = 11
      unknown_dp_error        = 12
      access_denied           = 13
      dp_out_of_memory        = 14
      disk_full               = 15
      dp_timeout              = 16
      not_supported_by_gui    = 17
      error_no_gui            = 18
      OTHERS                  = 19.
  IF sy-subrc <> 0.
    MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
            WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
    EXIT.
  ENDIF.

  n_lines_h = 0.
  LOOP AT itab INTO l_str.
    IF sy-tabix = 1. "Skip the first (header) line
      CONTINUE.
    ENDIF.
    CLEAR idat_h.
    "MANDT, APPSET_ID and APPLICATION_ID are not read from the file
    SPLIT l_str AT ',' INTO
      seq_as_char        "hold SEQ in a char variable because SPLIT will not convert it to a number
      idat_h-validation_id
      idat_h-val_check
      idat_h-r_selection
      idat_h-r_destination
      idat_h-period
      max_char           "idat_h-max_amount
      idat_h-commnt.
    idat_h-mandt = ''. "keep empty
    idat_h-appset_id = appset_id.
    idat_h-application_id = appl_id.
    idat_h-seq = seq_as_char. "move the char value SEQ_AS_CHAR into idat_h-seq
    idat_h-max_amount = max_char.
    APPEND idat_h TO v_header.
    n_lines_h = n_lines_h + 1.
  ENDLOOP.

* Upload the validation rule detail file
  CLEAR itab.
  CALL METHOD cl_gui_frontend_services=>gui_upload
    EXPORTING
      filename                = xlsfiled
      read_by_line            = 'X'
    CHANGING
      data_tab                = itab
    EXCEPTIONS
      file_open_error         = 1
      file_read_error         = 2
      no_batch                = 3
      gui_refuse_filetransfer = 4
      invalid_type            = 5
      no_authority            = 6
      unknown_error           = 7
      bad_data_format         = 8
      header_not_allowed      = 9
      separator_not_allowed   = 10
      header_too_long         = 11
      unknown_dp_error        = 12
      access_denied           = 13
      dp_out_of_memory        = 14
      disk_full               = 15
      dp_timeout              = 16
      not_supported_by_gui    = 17
      error_no_gui            = 18
      OTHERS                  = 19.
  IF sy-subrc <> 0.
    MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
            WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
  ENDIF.

  n_lines_d = 0.
  LOOP AT itab INTO l_str.
    IF sy-tabix = 1. "Skip the first (header) line
      CONTINUE.
    ENDIF.
    CLEAR idat_d.
    "MANDT, APPSET_ID and APPLICATION_ID are not read from the file
    SPLIT l_str AT ',' INTO
      seq_as_char        "hold SEQ in a char variable because SPLIT will not convert it to a number
      idat_d-validation_id
      sign_l_as_char     "idat_d-sign_l
      idat_d-account_l
      idat_d-subtables_l
      sign_r_as_char     "idat_d-sign_r
      idat_d-account_r
      idat_d-subtables_r
      idat_d-commnt.
    idat_d-mandt = ''. "keep empty
    idat_d-appset_id = appset_id.
    idat_d-application_id = appl_id.
    idat_d-seq = seq_as_char.
    idat_d-sign_l = sign_l_as_char.
    idat_d-sign_r = sign_r_as_char.
    APPEND idat_d TO v_detail.
    n_lines_d = n_lines_d + 1.
  ENDLOOP.

  DATA: t_str  TYPE string,
        ans(8) TYPE c.
  l_str = n_lines_h.
  CONCATENATE l_str ' rules and' INTO l_str.
  t_str = n_lines_d.
  CONCATENATE l_str ' ' t_str ' detailed rules will be updated for application '
              appl_id ' in appset ' appset_id INTO l_str.

  CALL FUNCTION 'POPUP_TO_CONFIRM'
    EXPORTING
      titlebar              = 'Please confirm the update of validation business rules'
      text_question         = l_str
      text_button_1         = 'Yes'
      icon_button_1         = 'ICON_OKEY'
      text_button_2         = 'Cancel'
      icon_button_2         = 'ICON_CANCEL'
      default_button        = '2'
      display_cancel_button = ''
      start_column          = 25
      start_row             = 6
    IMPORTING
      answer                = ans
    EXCEPTIONS
      text_not_found        = 1
      OTHERS                = 2.
  CASE ans.
    WHEN 1.
      CONCATENATE 'The update has been confirmed and will be proceeded for application '
                  appl_id ' in appset ' appset_id INTO l_str.
      WRITE: / l_str.
    WHEN 2.
      CONCATENATE 'The update has been canceled and nothing has been updated for application '
                  appl_id ' in appset ' appset_id INTO l_str.
      WRITE: / l_str.
      EXIT.
  ENDCASE.

  lo_biz_rule = cl_uja_admin_mgr=>get_biz_rules( i_appset_id = appset_id
                                                 i_appl_id   = appl_id ).

* Delete the existing validation rule headers from the system
  cl_uj_obj_dao=>delete_all( i_appset_id = appset_id
                             i_appl_id   = appl_id
                             i_tabname   = l_tab_name_h ).
* Write the validation rule headers from the internal table to the system
  CALL METHOD cl_uj_obj_dao=>set_tab_data
    EXPORTING
      i_tabname = l_tab_name_h
      it_data   = v_header.

  l_str = n_lines_h.
  CONCATENATE l_str ' rules are successfully written to validation header table' INTO l_str.
  WRITE: / l_str.

* Delete the existing validation rule details from the system
  cl_uj_obj_dao=>delete_all( i_appset_id = appset_id
                             i_appl_id   = appl_id
                             i_tabname   = l_tab_name ).
* Write the validation rule details from the internal table to the system
  CALL METHOD cl_uj_obj_dao=>set_tab_data
    EXPORTING
      i_tabname = l_tab_name
      it_data   = v_detail.

  l_str = n_lines_d.
  CONCATENATE l_str ' detail rules are successfully written to validation detail table' INTO l_str.
  WRITE: / l_str.

How To: Change Work Status from Script Logic using BAdI


Hello Folks,

 

I hope you are doing good.

A couple of days ago I came across a situation that is, a little bit, unusual. I was dealing with work statuses and locking on certain data regions.

In short, the scenario that I'm working on demands that a certain ENTITY owner locks several [VERSION,TIME] (VERSION is a CATEGORY dimension) combinations in one shot. Instead of letting him do this manually, I thought of using Script Logic.

 

I already knew that Script Logic would not offer a straightforward way to do this (NW Script Logic has a reduced feature set compared with its MS counterpart, and even the MS version couldn't do this directly), so I thought to myself: why don't I use a BAdI?

After a little digging inside the BW object explorer, I found the ABAP classes that manage the Work Statuses and I found a little example built-in with a sample program to test those classes right inside BW. What a jolly coincidence, thank you SAP!

 

I started copying and pasting the bits and pieces, modifying and shrinking the source to form just what I needed.

I came up with the following code (Excuse my lousy coding, I met ABAP a week ago. Good thing it's object oriented!)

 

 

method IF_UJ_CUSTOM_LOGIC~EXECUTE.

 

******** Data and Type declarations

 

    TYPES: BEGIN OF t_dims,

            dimension   TYPE uj_dim_name,   " Dimension Name

            parameter   TYPE char20,        " Parameter Name

            modif_id    TYPE char04,        " Modification Group

            text_id     TYPE char20,        " Text(Label) Field

           END OF t_dims.

 

 

    DATA: user type UJ0_S_USER,

          user_id        TYPE uj0_s_user-user_id,

          gr_work_status_mgr type ref to CL_UJW_WORK_STATUS_MGR,

          lt_dim_mem TYPE ujw_t_dim_mem,

          ls_dim_mem LIKE LINE OF lt_dim_mem,

          gt_dims TYPE TABLE OF t_dims,

          gs_dims LIKE LINE OF gt_dims,

          gs_user TYPE uj0_s_user,

          appset_id      TYPE uja_appset_info-appset_id,

          incl_child     TYPE abap_bool,

          status         TYPE uj_status,

          gr_exception       TYPE REF TO cx_uj_static_check, "cx_ujw_work_status_error,

          ls_param TYPE ujk_s_script_logic_hashentry,

          l_log TYPE string,

          l_entity(32) TYPE c,

          l_time(32) TYPE c,

          l_status(32) TYPE c,

          l_version(32) TYPE c,

          application_id TYPE uja_appl-application_id.

 

    FIELD-SYMBOLS: <lv_member> TYPE ANY.

 

******** Since this is an implementation of the BADI_UJ_CUSTOM_LOGIC interface, I used the following parameters to get the Appset and the application IDs.

 

    appset_id = I_APPSET_ID.

    application_id = I_APPL_ID.

 

******** I hardcoded my user id just for testing purposes.

******** You can use an ABAP method that can get you the currently logged-on user. In fact this is what you should do!

    user_id = 'RIZKJ'.

    gs_user-USER_ID = user_id.

******** I do not want to include the children of the entity!

    incl_child = abap_false.

 

******** I started reading the parameters that will be passed from Script Logic:

    CLEAR ls_param.

    READ TABLE it_param WITH KEY hashkey = 'VERSION' INTO ls_param.

    IF sy-subrc NE 0.

    l_log = 'You have not specified the parameter ''VERSION'' which is required.'.

    cl_ujk_logger=>log( i_object = l_log ).

    RAISE EXCEPTION TYPE cx_uj_custom_logic.

    EXIT.

    ENDIF.

    l_version = ls_param-hashvalue.

    cl_ujk_logger=>log( i_object = l_version ).

 

    CLEAR ls_param.

    READ TABLE it_param WITH KEY hashkey = 'ENTITY' INTO ls_param.

    IF sy-subrc NE 0.

    l_log = 'You have not specified the parameter ''ENTITY'' which is required.'.

    cl_ujk_logger=>log( i_object = l_log ).

    RAISE EXCEPTION TYPE cx_uj_custom_logic.

    EXIT.

    ENDIF.

    l_entity = ls_param-hashvalue.

    cl_ujk_logger=>log( i_object = l_entity ).

 

    CLEAR ls_param.

    READ TABLE it_param WITH KEY hashkey = 'TIME' INTO ls_param.

    IF sy-subrc NE 0.

    l_log = 'You have not specified the parameter ''TIME'' which is required.'.

    cl_ujk_logger=>log( i_object = l_log ).

    RAISE EXCEPTION TYPE cx_uj_custom_logic.

    EXIT.

    ENDIF.

    l_time = ls_param-hashvalue.

    cl_ujk_logger=>log( i_object = l_time ).

 

    CLEAR ls_param.

    READ TABLE it_param WITH KEY hashkey = 'STATUS' INTO ls_param.

    IF sy-subrc NE 0.

    l_log = 'You have not specified the parameter ''STATUS'' which is required.'.

    cl_ujk_logger=>log( i_object = l_log ).

    RAISE EXCEPTION TYPE cx_uj_custom_logic.

    EXIT.

    ENDIF.

    l_status = ls_param-hashvalue.

    cl_ujk_logger=>log( i_object = l_status ).

********Here, we are reading the parameter that will contain the status. The status is a numerical code that is found in a table starting with "UJW_". Just search for it and you will find it!

    status = l_status.

 

******** Here, we are filling the dimension table that will contain the dimensions intersection that will be affected by the work status change.

      CLEAR ls_dim_mem.

      ls_dim_mem-dimension = 'MINING_ENTITY'.

      ASSIGN l_entity TO <lv_member>.

      ls_dim_mem-member = <lv_member>.

      APPEND ls_dim_mem TO lt_dim_mem.

      CLEAR ls_dim_mem.

      ls_dim_mem-dimension = 'TIME'.

      ASSIGN l_time TO <lv_member>.

      ls_dim_mem-member = <lv_member>.

      APPEND ls_dim_mem TO lt_dim_mem.

      CLEAR ls_dim_mem.

      ls_dim_mem-dimension = 'VERSION'.

      ASSIGN l_version TO <lv_member>.

      ls_dim_mem-member = <lv_member>.

      APPEND ls_dim_mem TO lt_dim_mem.

 

******** And this is where the magic happens! We instantiate the gr_work_status_mgr object with a factory object builder (nice use of design patterns!)

    gr_work_status_mgr = cl_ujw_work_status_mgr=>factory(

                               is_user   = gs_user

                               i_appset  = appset_id ).

 

******** And we call the method that will change the work status!

    TRY.

      gr_work_status_mgr->update_work_status_locks(

            EXPORTING

              i_applid        = application_id

              it_dim_mem      = lt_dim_mem

              i_incl_children = incl_child

              i_status        = status         ).

      MESSAGE  text-023 TYPE 'S'.

      CATCH cx_uj_static_check INTO gr_exception.

******** I caught the exception and did absolutely nothing with it! You should not! Log it or something...

    ENDTRY.

 

ENDMETHOD.

 

You have to excuse my fascination, I'm still relatively new to ABAP and I was surprised that it had incorporated some of the features of more modern programming languages. I was expecting something hideous like COBOL! (With all my excuses to all the COBOL advocates)

 

There are some comments that need to be put in place:

  • First: you can observe that this method is an implementation of the BADI_UJ_CUSTOM_LOGIC Business Add-In. This is necessary so you can call the method from Script Logic.
  • Second: In order to make the code available to Script Logic, you have to create a filter for the BAdI implementation. Let's suppose that you named the filter Z_LOCK_WS.

 

Now to call the code from Script Logic, you have to create a new Logic Script! Name it whatever you like and put inside the following piece of script:

 

*START_BADI Z_LOCK_WS

 

QUERY = OFF

WRITE = OFF

ENTITY = E100

TIME = 2012.01

VERSION = ACTUAL

 

*END_BADI

 

You can call this method multiple times for more than one intersection. You can even use a FOR loop and some data manager variables to call this for multiple intersections at once. For example:

 

*FOR %CURRENT_MONTH% = $MY_MONTHS$

 

*START_BADI Z_LOCK_WS

 

QUERY = OFF

WRITE = OFF

ENTITY = E100

TIME = %CURRENT_MONTH%

VERSION = FORECAST1

 

*END_BADI

 

*NEXT

 

This took me quite a while to figure out, and I hope it will be useful to some. All comments are welcome!

Have a great time,

Joseph.

Auto mail notification from SAP


Hi,

 

I would like to know how to create an auto mail notification from SAP.

 

Every month, we generate 1000+ invoices for various vendors. After invoice generation, we manually download the invoice copies and send them to the vendors. Instead of sending them manually, we want to send automatic mails.

 

Please suggest a solution.

Enhanced BPC Mass User Management Tool including delete functions for BPC 7.5 NW


While I was working at a customer site implementing BPC security for BPC NW 7.5, I found that the 'BPC Mass User Management Tool' doesn't support delete functions!


So I asked the customer about it, and she said she had to delete users, profiles and teams one by one using the BPC admin console...

In addition, at the same time, one of my colleagues asked me the same question, so I decided to enhance the tool.

 

I am not an expert ABAP developer, but thanks to my development experience with other programming languages, I was able to enhance it.

After I made it, I tried to package it as a transport file, but I gave up because it could trigger maintenance issues.


If many BPC administrators/consultants need this tool, I will post it later along with documentation, so please leave a comment.

SAP Solution Manager System Monitoring for SAP Business Planning and Consolidation


Efficient and effective Technical Operations, i.e. running a system landscape like a factory, requires comprehensive system monitoring of managed system components. For SAP Business Planning and Consolidation, SAP Solution Manager can be used to implement monitoring and alerting (i.e. Run SAP like a Factory Event Management).

 

Content for SAP Solution Manager System Monitoring (Monitoring and Alert Infrastructure/MAI) for SAP Business Planning and Consolidation will be made available via MAI content update and documented in an SAP Note and an SDN document. Further information can be found here: https://scn.sap.com/people/robin.haettich/blog/2013/12/31/sap-solution-manager-system-monitoring-for-sap-business-planning-and-consolidation

 

With best regards

Robin

Retraction from BPC 10.0 to BI


Hi All,

 

We are migrating from BPC 7.5 to BPC 10.0. During retraction we faced an issue related to the super class used for the Enhancement Spot 'UJD_RETRACT'.

 

The issue is the syntax error shown below. Since the method is not available in the interface 'IF_UJ_MODEL', this error is raised.

 

Capture.JPG

 

To resolve this issue, we need to make the changes below in the specified super class 'ZCL_BPC_RTRCT_SUPERCLASS'.

 

Before making any change to the super class, we need to create a package with the name 'ZBPC' (the package name specified in the super class).

 

zbpc.JPG

 

How to Create a Package:

 

Go to SE80

 

zbpc.JPG

Click on Yes

Z.JPG

Give a description and press Enter; it will ask you to create a transport request.

 

Now you need to replace the code with the new code in the following methods of the super class.

 

We need to pass the environment ID to 'i_appset_id', which we hard-code, let's say to 'ENVIRONMENT_SHELL'.

 

GET_APP_STORAGE_TYPE Method  :

 

Z.JPG

GET_DIM_FROM_DIM_TYPE Method

 

zbpc.JPG

 

GET_DIM_LIST Method

 

Z.JPG

 

INITIALIZE Method - here you need to comment out the code where load_all_dim_mbrs is called.

 

zbpc.JPG

Once the changes are done, you can perform the retraction properly from BPC to BI.

The Buzz about SAP Business Planning and Consolidations 10.1, version for SAP NetWeaver, powered by SAP HANA


There’s a buzz about the latest release of SAP Business Planning and Consolidation (BPC) 10.1, version for SAP NetWeaver, powered by SAP HANA. Having been released into the ramp-up program at the end of December this new release is now officially available for early access customers to download. But just what’s the buzz then about a seemingly normal software release?  It’s all to do with what this new release delivers, to both current BPC customers, as well as a wider audience of SAP customers who use a business planning solution based on the SAP NetWeaver platform.  Customers may now access new key features and functionality including:

 

·A NEW Unified model with tighter integration with SAP Business Warehouse to maximize investment

·A NEW HTML5 Web Client for smoother navigation and mobile readiness

·Enhanced Scalability & Performance for faster decision making leveraging SAP HANA

 

As described by Uwe Fischer in his recent blog, with SAP BPC 10.1 the Unified Model brings together the end user flexibility of the existing BPC NW data model (based on the BPC 10.0 paradigm) and the BW-IP / PAK based models (tight integration with BW).  The BPC Classic Model will continue to be supported in BPC 10.1NW.

 

Customers evaluating the BPC Unified Model, coming from a BPC 7.X/10.0 environment will be pleased to take advantage of the following new features available to them such as:

·Deeper BW Integration – ability to access data and masterdata from infocubes based on similar (Multiple Key Figure) data structure

·Planning Sessions – easy creation of “What-If” scenarios based on full data sets, optimized by SAP HANA

·Matrix Security

·Calculated Key Figures / Inverse Formulas - Dimension members defined by a formula* with input enablement

·Cell Locking - keep measure values fixed during calculations such as disaggregations or inverse formulas

 

Further highlights to the features and functionality that are introduced in this latest release are provided on the SAP Analytics Blog.

 

Want more? Then why not watch the new series of videos on YouTube, which have been created to allow you to experience what’s possible in SAP Business Planning and Consolidation 10.1, version for SAP NetWeaver. This series of 12 videos, which I’ll share in two blog posts, highlights a step-wise process, showing just how easy it is to incorporate BW-IP / PAK models into a new Unified Model. Here are links to the first six videos:

 

 

In this video, with the new HTML5 client for administration, leverage the integration native in SAP Business Planning and Consolidation 10.1 to create models based on existing BW InfoProviders.  Watch how existing BW MasterData can be consumed in the BPC Unified Model real time.

 

 

In the scenario above, witness how the Unified Model in SAP BPC 10.1NW uses the new HTML5 Admin layer to assign users to teams and configure work status.  This is similar to the functionality available in the BPC Classic paradigm and further moving to greater end user flexibility.

 

 

With the new HTML5 Web Client, watch how easily Input Schedules can be created in the Unified Model based on BW queries.  The Web Client also allows for a greater level of mobile readiness. 

 

 

In this video, Business Process Flows (BPF’s) are introduced in the Unified Model.  This is a feature strongly adopted by the “Classic” BPC community and is now available for use by the BW-IP/PAK users.  With Business Process Flows, users are guided through a sequence of tasks within a defined business process (e.g. Sales Planning, Cost Center Planning).

 

 

Within BPF’s multiple tasks are available to use within the various processes.  In this video watch how the activities and tasks required for planning activities leveraging templates are created within the Unified Model.

 

 

Once a Business Process Flow has been created, create an instance for a particular time period of planning activities and initiate it for users to leverage.

Looks easy doesn’t it? I hope that you’re now feeling the “buzz” about this release too! Don’t forget to watch out for links to the other six videos in my next blog post.



How To: Ask questions about Script Logic issues


After spending some time answering questions about script issues I've found the following:

 

In more than 60% of cases, in order to give a useful answer, I have to ask many additional questions just to understand the issue!

 

I decided to summarize some generic rules to speed up the process:

 

1. Always provide the BPC version and SP level of the core BPC system (scripts work differently in 7.5 and 10, and even at different SP levels).

2. For BPC NW 10, state the K2 calculation engine used: ABAP or JAVASCRIPT (script execution also differs between them).

3. Please describe the models/applications: the full list of dimensions and some example members relevant to the script discussed. For Account dimension members, provide the ACCTYPE property.

4. Clearly identify the purpose of the script: to run as default.lgf or to be launched by a DM package (different calculation logic!).

5. Describe the calculation logic using some pseudo formulas like: Amount=Price*Quantity etc. Add a description in words of the calculation logic.

6. Provide the FULL script that is not working (showing only part of the script, without some previous scope changes, can lead to misunderstanding).

7. The best option is to provide script tests in UJKT, including the original script, the Data Region content, the PARAM content and the log result with the generated LGX.

8. If the script is working but generating incorrect results, show the report data before and after the script run. The report data has to contain member names for better understanding.

9. If there are questions about advanced DM scripts, the full text of the DM script has to be provided. Also add the list of dimensions and the text of the script logic file.

 

Example:

 

-----------------------------------------------------------------------------------------------------------------

Topic name: Script writes a multiple of the correct value

 

BPC NW 10 SP 11, Engine ABAP.

In the model SOMEMODEL we have the following dimensions: ACCOUNT (A), TIME (T), CATEGORY (C), ENTITY (E), CURRENCY (R), PRODUCT (User Def)

In the ACCOUNT dimension there are members PRICE (EXP),QUANTITY (EXP), DISCOUNT (EXP) and AMOUNT (EXP)

We want to calculate AMOUNT=QUANTITY*PRICE*(1-DISCOUNT) in default.lgf. When a user enters or changes any or all of the members PRICE, QUANTITY and DISCOUNT in the input schedule and saves the data, AMOUNT has to be calculated.

 

The script:

 

*WHEN ACCOUNT

*IS PRICE

  *REC(EXPRESSION=[ACCOUNT].[QUANTITY]*%VALUE%*(1-[ACCOUNT].[DISCOUNT]), ACCOUNT= "AMOUNT")

*IS QUANTITY

  *REC(EXPRESSION=%VALUE%*[ACCOUNT].[PRICE]*(1-[ACCOUNT].[DISCOUNT]), ACCOUNT= "AMOUNT")

*IS DISCOUNT

  *REC(EXPRESSION=[ACCOUNT].[QUANTITY]*[ACCOUNT].[PRICE]*(1-%VALUE%), ACCOUNT= "AMOUNT")

*ENDWHEN

 

In UJKT we test this script with Data Region:

 

TIME=2013.09

CATEGORY=Actual

ENTITY=US

CURRENCY=LC

PRODUCT=Prod1

 

The values for ACCOUNTs in the report are:

 

PRICE: 100

DISCOUNT: 0.5

QUANTITY: 4

 

Expected result: 200=100*4*(1-0.5)

 

Result in UJKT:

 

LGX:

 

*WHEN ACCOUNT

*IS PRICE

*REC(EXPRESSION=[ACCOUNT].[QUANTITY]*%VALUE%*(1-[ACCOUNT].[DISCOUNT]), ACCOUNT= AMOUNT)

*IS QUANTITY

*REC(EXPRESSION=%VALUE%*[ACCOUNT].[PRICE]*(1-[ACCOUNT].[DISCOUNT]), ACCOUNT= AMOUNT)

*IS DISCOUNT

*REC(EXPRESSION=[ACCOUNT].[QUANTITY]*[ACCOUNT].[PRICE]*(1-%VALUE%), ACCOUNT= AMOUNT)

*ENDWHEN

 

-------------------------------------------------------------------------------------------------------------------------------------

LOG:

 

LOG BEGIN TIME:2014-01-31 16:18:24

FILE:\ROOT\WEBFOLDERS\SOMEENV \ADMINAPP\SOMEMODEL\TEST.LGF

USER:V.KALININ

APPSET:SOMEENV

APPLICATION:SOMEMODEL

[INFO] GET_DIM_LIST(): I_APPL_ID="SOMEMODEL", #dimensions=7

ACCOUNT,ENTITY,MEASURES,TIME,CATEGORY,CURRENCY,PRODUCT

 

#dim_memberrset=4

TIME:2013.09,1 in total.

CATEGORY:Actual,1 in total.

PRODUCT:Prod1,1 in total.

ENTITY:US,1 in total.

CURRENCY:LC,1 in total.

 

REC :[ACCOUNT].[QUANTITY]*%VALUE%*(1-[ACCOUNT].[DISCOUNT])

 

CALCULATION BEGIN:

QUERY PROCESSING DATA

QUERY TIME : 0.00 ms. 1  RECORDS QUERIED OUT.

QUERY REFERENCE DATA

QUERY TIME : 1.00 ms. 3  RECORDS QUERIED OUT.

CALCULATION TIME IN TOTAL :0.00 ms.

1  RECORDS ARE GENERATED.

CALCULATION END.

 

#dim_memberrset=4

TIME:2013.09,1 in total.

CATEGORY:Actual,1 in total.

PRODUCT:Prod1,1 in total.

ENTITY:US,1 in total.

CURRENCY:LC,1 in total.

 

REC :%VALUE%*[ACCOUNT].[PRICE]*(1-[ACCOUNT].[DISCOUNT])

 

CALCULATION BEGIN:

QUERY PROCESSING DATA

QUERY TIME : 0.00 ms. 1  RECORDS QUERIED OUT.

QUERY REFERENCE DATA

QUERY TIME : 0.00 ms. 3  RECORDS QUERIED OUT.

CALCULATION TIME IN TOTAL :0.00 ms.

1  RECORDS ARE GENERATED.

CALCULATION END.

 

#dim_memberrset=4

TIME:2013.09,1 in total.

CATEGORY:Actual,1 in total.

PRODUCT:Prod1,1 in total.

ENTITY:US,1 in total.

CURRENCY:LC,1 in total.

 

REC :[ACCOUNT].[QUANTITY]*[ACCOUNT].[PRICE]*(1-%VALUE%)

 

CALCULATION BEGIN:

QUERY PROCESSING DATA

QUERY TIME : 1.00 ms. 1  RECORDS QUERIED OUT.

QUERY REFERENCE DATA

QUERY TIME : 0.00 ms. 3  RECORDS QUERIED OUT.

CALCULATION TIME IN TOTAL :0.00 ms.

1  RECORDS ARE GENERATED.

CALCULATION END.

 

ENDWHEN ACCUMULATION: 1  RECORDS ARE GENERATED.

 

DATA TO WRITE BACK:

ACCOUNT    ENTITY    TIME    CATEGORY    PRODUCT    SIGNEDDATA

AMOUNT    US    2013.09    Actual    Prod1    600.00

1  RECORDS HAVE BEEN WRITTEN BACK.

WRITING TIME :6.00  ms.

 

SCRIPT RUNNING TIME IN TOTAL:8.00 s.

LOG END TIME:2014-01-31 16:18:32

 

And the report for:

 

ACCOUNT=AMOUNT

TIME=2013.09

CATEGORY=Actual

ENTITY=US

CURRENCY=LC

PRODUCT=Prod1

 

Shows the value for AMOUNT: 600, i.e. 3 times more than expected.

 

What is the reason?

-----------------------------------------------------------------------------------------------------------------

Sample End
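For the curious, the tripled value in the sample can be modeled with simple arithmetic: *WHEN loops over every record in the scoped data region, each of the three stored records fires its matching *IS branch, each *REC generates an AMOUNT record of 200, and *ENDWHEN accumulates the generated records into a single write-back value (this matches the three "1 RECORDS ARE GENERATED" entries in the log). A toy Python sketch of that accumulation, using the values from the hypothetical sample above - it is an illustration of the behavior, not BPC code:

```python
# Toy model of *WHEN / *REC / *ENDWHEN behavior for the sample script above.
# Three scoped records -> three generated AMOUNTs -> one accumulated record.
stored = {"PRICE": 100.0, "QUANTITY": 4.0, "DISCOUNT": 0.5}

generated = []
for _account in stored:  # *WHEN iterates over every record in the scope
    # Each matching *IS branch evaluates the same product via member lookups:
    generated.append(stored["QUANTITY"] * stored["PRICE"] * (1 - stored["DISCOUNT"]))

# *ENDWHEN accumulates generated records that share the same target key:
amount = sum(generated)
print(amount)  # 600.0: three branches each produce 200.0
```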

 

 

Best Regards,

Vadim

SAP Business Planning and Consolidations 10.1, version for SAP NetWeaver, powered by SAP HANA…have you got the buzz yet?


As mentioned in my previous blog there is a lot of noise and excitement about SAP Business Planning and Consolidation 10.1, version for SAP NetWeaver, powered by SAP HANA!  The Unified Model brings about a new era in planning, budgeting and forecasting for customers looking for an SAP BW based solution. 

 

With the new Unified Model, planning models in BW are no longer just centralized as they were with BW-IP/PAK, nor are they locally managed as they were with SAP Business Planning and Consolidation 7.x/10.0.  With the Unified Model, the best of both worlds is combined into one broad-based planning environment that provides end user flexibility with deep integration into your existing SAP investment.   The data and master data from SAP BW can still be leveraged within the planning model, yet local users can leverage the new HTML5 web client to perform functions such as Audit, Work Status and Business Process Flows and drive them at their leisure. 

 

Previously I invited you to take a tour through the first six of our twelve videos on YouTube.  Now I’d invite you to view the next six which highlight user driven functions such as Audit, Work Status and the review and adjustment of the plan.  Here are links:

 

 

With the new Unified Model see how Business Process Flows are used to execute the business planning process by first setting the Work Status.  Leveraging Work Status, previously available in the BPC 7.X/10.0 models and now available in the Unified Model, users can be informed of the planning process status as well as use the function to lock down the process.

 

 

Watch how the Unified Model can initialize a plan by copying data from another version leveraging Analysis for Office.

 

 

See how the final piece of the planning process can be executed, by adjusting the plan and then setting the Work Status to "sent for approval", leveraging the new HTML5 client.  Watch how charts can be utilized within the HTML5 client for reporting purposes.

 

 

Continue executing final steps as a planner within the Unified Model but from a Reviewer’s perspective.  Use the HTML5 admin client to adjust the plan as needed and set the Work Status to an approved state.

 

 

Review the planning activities within the Process Monitor and see which tasks have been completed and what the status of the current tasks is. 

 

 

Leverage the Audit function available in the new Unified Model to execute a Data Audit report to review results and changes that have been made within the planning process. 

 

I hope you’re even more excited now about the power available in the new Unified Model within SAP Business Planning and Consolidation 10.1. It’s an exciting time to be a part of the BPC community!

 

Lite Optimize - A little guide to the big things you should know


Hello

 

Anyone who works with BPC will no doubt be aware of the Lite Optimise process.  It's easy to trigger, runs relatively quickly and significantly improves performance.  Essentially, a Lite Optimise rolls up and compresses data in the BPC model / cube, then updates the DB statistics.
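As a rough mental model (a simplification for illustration, not BW internals), the compression part of a Lite Optimise merges rows from separate load requests that share the same dimension key into a single row:

```python
# Simplified sketch of cube compression: request IDs are dropped and rows
# with identical dimension keys are summed into one row. Sample rows invented.
from collections import defaultdict

uncompressed = [  # (request_id, key, signed_data)
    (1, ("2013.09", "US", "AMOUNT"), 100.0),
    (2, ("2013.09", "US", "AMOUNT"), 50.0),
    (2, ("2013.10", "US", "AMOUNT"), 70.0),
]

compressed = defaultdict(float)
for _request_id, key, value in uncompressed:
    compressed[key] += value  # duplicate keys collapse into one row

print(dict(compressed))
# {('2013.09', 'US', 'AMOUNT'): 150.0, ('2013.10', 'US', 'AMOUNT'): 70.0}
```

Fewer physical rows to scan and aggregate at query time is where the performance gain comes from.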

 

However, if you're new to BPC, there are a few other things you need to be aware of in order to help you understand the process a little more.  There may be more things to add to my list so feel free to comment on the post!

 

 

 

Be aware if people are trying to save!

 

If you were to take a closer look at the steps involved in a lite optimise via the process chain (/CPMB/LIGHT_OPTIMIZE) you will see a couple of important steps:

 

steps.png

 

Notice the highlighted steps above: in order for the roll-up and compression to work, the cube needs to be taken out of planning mode.  It is this planning mode that allows your Input Schedules to write updates back to the cube in real time.

 

So, it's very important to understand that while the Lite Optimize process is running and the cube is not in planning mode, users will not be able to save data!

 

Currently, BPC doesn't do a very good job of letting you know why you can't save (e.g., because a Lite Optimize is running), so be sure to trigger it when saving activity is at its lowest!

 

 

 

Big data loads require a Lite Optimise

 

Any large data change to a model needs to be followed by a Lite Optimize.  This could include data loads from BW, clear and copy packages, or substantial user submissions via Input Schedules.  You will begin to notice a deterioration in performance the longer you go without a Lite Optimize!

 

In our environment, we have configured a trigger for the Lite Optimize to kick off after every transactional load process into BPC from BW.  Admin users are also advised to kick off a Lite Optimize after running copy/clear/load operations.  The next point explains how this can be done.

 

 

 

Triggering a Lite Optimize outside of the EPM add-in

 

It is true, the Lite Optimize can be manually triggered or scheduled from the EPM add-in.  But what if you have a separate BPC process chain and want to automatically trigger a Lite Optimise once it is complete? (e.g., daily data loads from BW into BPC)

 

You can do this as follows:

 

1. Create a new process chain with a start process

2. Create a new ABAP Program

3. Set the 'Program Name' to 'UJD_TEST_PACKAGE'

4. Create a new 'Program Variant'

5. Fill in the details (e.g., Package ID = Lite Optimize, Package Group ID = Other Function, or whatever folder the Lite Optimize package is stored in within EPM)

6. Reference the newly created chain at the end of your other process chains.

 

If you do a search on UJD_TEST_PACKAGE you will find a lot more detail on these steps, which also allow you to trigger any package from BW!

 

 

 

Get rid of the 0's!

 

Whenever you run a clear package, you're not actually deleting data from the cube.  In fact, you are setting key figures to 0.  This explains why you cannot delete a member from a dimension after 'clearing' all data: there is still data against that member, albeit zeros!

 

A great way to clear these 0's is to update your Lite Optimize process chain to clear 0's as part of the Collapse step in the chain /CPMB/LIGHT_OPTIMIZE.

Capture.PNG

 

You should really avoid using the selective deletion function on the cube!  One slip of the hand or an incorrect selection of data and you're in a spot of bother.
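The clear-then-compress behavior described above can be pictured with a small sketch (a simplified model of the behavior, with invented keys and values - not BPC internals): a clear package writes zero values, and only a compression with zero elimination physically removes those rows.

```python
# Simplified model: 'clear' writes zeros; zero elimination deletes them.
fact_table = {
    ("2013.09", "PRODUCT_A"): 150.0,
    ("2013.09", "PRODUCT_B"): 80.0,
}

def clear(table, keys):
    """A clear package does not delete rows: it sets key figures to 0."""
    for key in keys:
        table[key] = 0.0

def compress_with_zero_elimination(table):
    """The Collapse step with zero elimination drops zero-valued rows."""
    return {k: v for k, v in table.items() if v != 0.0}

clear(fact_table, [("2013.09", "PRODUCT_A")])
# The 'cleared' member still has a row, so the dimension member
# cannot be deleted yet:
assert ("2013.09", "PRODUCT_A") in fact_table

fact_table = compress_with_zero_elimination(fact_table)
assert ("2013.09", "PRODUCT_A") not in fact_table  # now physically gone
```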

 

 

 

Lite Optimise before you back up an environment!

 

UJBR is a great BW program which allows you to back up and restore your BPC environment.  If you don't know about this, you should certainly do some searches to bring yourself up to speed!  We run this every night as an automated job to ensure we have a rolling 7-day backup of our BPC environment, which we can easily roll back to without impacting the rest of BW.

 

Anyway, it can take quite a long time to back up your environment if you have a lot of data.  Running a Lite Optimize immediately before a backup will significantly improve your wait times!

 

 

 

 

That's all that comes to mind right now.  I hope this helps those of you who are starting out in BPC.  I certainly could have done with these tips early on !

 

As already mentioned, if you can add to this in any way, please comment below.

 

Ian

Login issues in BPC 10


Hi All,

 

Many people are facing issues when logging into BPC 10 from the Web Client. Yes, I also had gone through a lot of such issues during the first few days. So I thought, it would be helpful if I share some of the troubleshooting I have done to get this ready.

 

1. First of all, always start by clearing the cookies and cache of your browser. If you are using a remote client browser (e.g. via Citrix), clear your local browser's cookies and cache as well. (It is weird, but it worked for me! )

 

2. The next important thing is that BPC 10 needs Adobe Flash Player to display the portal. So if you get the login screen and, after entering the credentials, you just see a black screen or some browser error, it is probably a Flash Player error. You need to have the latest Adobe Flash Player ActiveX and Plugin installed in your system. Just make sure you have them. (ActiveX is for Internet Explorer and the Plugin is for other browsers. I had ActiveX and was using IE, but it worked for me only after installing the Plugin as well. Yes, weird again. And one more thing - I was using IE from a remote desktop and did not have access to install anything there. So I just tried installing locally and bingo, it worked! ) For users who use Citrix XenApp, there is an option in the 'desktop deck' to 'Optimize Flash Content' under 'Preferences'. Please check this also.

 

3. Now, the next possibility is that you do not have sufficient access for the BPC Web Login. For this, you need to check with your Basis / Security team whether the following two roles are added to your profile in the backend (ECC / ABAP) system.

 

/POA/BUI_FLEX_CLIENT

/POA/BUI_UM_USER

 

These two roles are mandatory for accessing BPC via web.

 

4. If all these are ready and you still can't access BPC, it means you are missing appropriate authorizations in the system. For example, you do not have sufficient 'Environment' access. For this, you need to check with your Basis / Security Admin to assign your access. This can be done from both the frontend (BPC) and backend (ABAP) systems.

 

Please go through the below document for Administration from backend:

http://scn.sap.com/docs/DOC-51460

 

Also, here are some links to the discussion forums which could help you more.

http://scn.sap.com/thread/3494475

http://scn.sap.com/thread/3494785

 

Hope this helps!

 

Regards,

Hari

BPC NW 7.5 with Citrix XenApp 6.5


Well... I think of BPC as an application: if it works well on a desktop client, it should work well in a Citrix environment.

But the reality is not what you would expect. In other words, the customer will blame BPC instead of Citrix.

 

Recently, I worked with a customer on this issue and, to make a long story short, BPC 7.5 SP15 will work with Citrix XenApp 6.5 with a couple of tweaks.

It was a long story, but I won't write it all down.

 

Anyway, here is the solution for XenApp 6.5.

 

1. Make separate user profiles when users need to use different SP versions of BPC 7.5

2. Bring the Citrix server up to date with the latest Windows updates.

3. Apply the latest Citrix hotfix (the latest one is Rollup Pack 3, released in December 2013)

4. Add a registry value for enabling Excel hook http://support.citrix.com/article/CTX133198

5. Apply the Citrix note for preventing Restart\Shutdown events by USER32   http://support.citrix.com/article/CTX130179

6. Create whole new user profiles.

 

I hope this blog will help someone who is trying to migrate their Citrix environment from XenApp 5 to 6.5, because Citrix will stop supporting XenApp 5 - regular maintenance actually ends in July 2014.

    James Lim

    Annual Operating Planning ( AOP) - SAP BPC 10.0NW


    The P&L planning, CAPEX planning, Balance Sheet planning and Cash Flow planning will form the basis for the AOP.


    Data Flow :

     

    The proposed AOP (Annual operating Plan) is displayed in the graphic below. For AOP, an organization evaluates prior data for all aspects of the business, including revenue, production, capacity, expenses and so forth.


    AOP.png


    The process steps for AOP planning are as follows:

     

    1. The AOP process in SAP BPC will begin by loading the previous year's actual data from SAP BI (marked with orange arrows). SAP BI will extract data from the SAP ECC (HCM, CAPEX and cost center data) and SAP APO DP (sales volume data) applications. All the relevant data will be loaded into SAP BPC via SAP BI.

     

    2. HR planning will begin once the HR related data is available in SAP BPC. The Headcount Data Planning and Payroll Rate Planning will be available for each department to plan their respective headcount data at position/job and cost center level. This planned data will be an input to P&L Planning.

     

    3. Revenue planning will begin with the sales volume data available in SAP BPC from the SAP APO DP module. Sales prices will be planned by the planners in Sales Planning on top of the sales volume data. This will be an input to P&L Planning. (SAP APO DP --> BW --> BPC)

     

    4. Cost Center Planning will begin once the cost center data is available in SAP BPC. The cost center manager/planner will plan for individual cost centers in Cost Center Planning template. This will be an input to P&L Planning (CCA-->BW-->BPC)

     

    5. CAPEX planning will be done on WBS (Work Break down Structure) elements level in Capex planning template. Master Data will be loaded from (ECC).

     

    6. CAPEX planning will be done at the asset class level in the CAPEX Planning template available in SAP BPC. Planning data will be submitted through the input form template and stored in BI InfoCubes; actual CAPEX data will then be loaded into the BW DataStore to support Actual vs. Plan comparisons.

     

    Depreciation planned data will be sent to P&L Planning

     

    7. P&L planning will be performed in P&L Planning template.

     

    8. Balance sheet planning will be performed in the Balance Sheet Planning template. Both Cash Flow and P&L Planning will provide input to Balance Sheet Planning.


    Data Model

    Environment and Models:

     

    An environment consists of one or more models and stores all the data from each model. Each model contains the master data that controls the data in the environment. Each model uses some or all of the dimensions within the environment.

     

    There will be one environment, "AOP", that will include the models defined below. The solution has the following models:


    AOP1.pngAOP2.pngAOP3.png

     


    BPC75NW Transport a New Dimension in existing Application

    In the past months we had to deal with a new business requirement: we had to add a new dimension to a productive Application implemented in BPC75NW. This article deals with the procedure we decided to follow in order to preserve landscape consistency, so as to keep a "transport controlled" system.

     

    On SDN we found no reference to a similar scenario (see BPC75NW Transport a New Dimension in existing Application), so we decided to summarize and share our experience.

     

    The CTS (Correction and Transport System) in BPC75NW does not allow transporting single objects: change requests are created via a specific transaction, UJBPCTR. On this topic, see Life Cycle Management and SAP BusinessObjects Planning and Consolidation Version for NetWeaver.

     

    With BPC NW it's easy to move from a transport-controlled ApplicationSet to a scenario where direct changes in the Prod or QA systems are allowed. This happens when business users are allowed to make changes directly in the Prod environment.

     

    Some days or weeks after the initial transports, a brand new ApplicationSet in the Prod environment can be totally different from the one in the Dev environment: as a consequence, CTS becomes much more than unusable: it becomes dangerous!

     

    In our scenario BPC75NW was managed via CTS (using the typical Dev > QA > Prod landscape), so we had to plan the steps to reduce the time needed to apply such a modification to the target Application. Our primary aim was minimizing the Application Set's "offline" time.


    So we decided to clarify what was going to be included in the transport: in order to include a Dimension in any Application, it must contain at least one base member.

    But which member? We analyzed the so-called "Shadow Table" UJT_TRANS_OBJ and noticed that, for any Dimension, UJBPCTR was including details on one base member (just filter table UJT_TRANS_OBJ with APPSET_ID = <your_appset> AND TABNAME = UJT_TRANS_MBR).


    UJT_TRANS_OBJ_01.jpg

    We soon discovered that the transported members were not the "first" base members.

    In our view, the transported member should have been the one with CALC = N and the lowest MBR__SEQ.
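That expected selection rule can be sketched as follows (the member data below is invented for illustration; this models the rule, not the actual UJBPCTR code):

```python
# Sketch of the expected selection rule: among a dimension's members,
# take the base member (CALC = 'N') with the lowest MBR__SEQ.
# Member data is invented for illustration.
members = [
    {"ID": "TOTAL",  "CALC": "Y", "MBR__SEQ": 1},
    {"ID": "BASE_B", "CALC": "N", "MBR__SEQ": 5},
    {"ID": "BASE_A", "CALC": "N", "MBR__SEQ": 3},
]

base_members = [m for m in members if m["CALC"] == "N"]
transported = min(base_members, key=lambda m: m["MBR__SEQ"])
print(transported["ID"])  # BASE_A
```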


    So we thought we had to work on table UJT_TRANS_MBR, to force the desired base member to be transported.

    But then we discovered that UJT_TRANS_MBR is deleted and re-populated at each execution of UJBPCTR (see method COPY_PROD_DATA_TO_SHADOW of class CL_UJT_TRANSPORTS_MGR).


    CL_UJT_TRANSPORTS_MGR_01.jpg

    In the end we found that the base member to be transported is chosen with this call in the same method:

    CL_UJT_TRANSPORTS_MGR_02.jpg

    So, before releasing the change request, we modified the Shadow Table UJT_TRANS_OBJ in order to transport the desired base member of the new dimension.

    To do this:


    a. Find the ROWINDEX value in table UJT_TRANS_OBJ corresponding to:


    TABNAME = UJT_TRANS_MBR

    ATTRIBUTE_NAME = ID

    DIMENSION = <your_dim>

     

    UJT_TRANS_OBJ_02.jpg

    b. In change mode, edit the FIELDVALUE corresponding to FIELDNAME = ATTRIBUTE_VAL


    We decided not to act on all properties of our Dimension: the most important thing for us was avoiding a "Move" DM Package execution in the target system, because of the large amount of data.


    For safety reasons, we also decided to take a backup (TCode UJBR) in all target systems (QA and Prod) before importing the mentioned change request, in order to have a rollback point to return to in case of errors. Luckily, the Restore option was not necessary at all!



    Hope it helps


     


    How to Ensure that BPC Displays Only Uploaded Records for Planning


    SAP BPC has been used for various planning purposes: Demand Planning, Supply Planning, Sales Planning and Marketing Planning, along with various financial planning processes such as Cash Flow Planning and Budgeting.

     

    We all know that a BPC Model/Application/Cube supports account modelling. When a default report is created, you get all the possible intersections of the master data available for planning. More often than not, most of these combinations are undesirable for planning, and there are various ways to display only the right plannable combinations. I am going to talk about a quick approach to achieve this via a hypothetical scenario.

     

    Company XYZ Ltd deploys SAP APO for Demand Planning and for releasing the demand to the supply department. However, they cannot have their field sales representatives access the SAP APO system to enter forecasts. Most of them do not have SAP GUI installed, and the only way they can access and input plan numbers is via Excel files. This is where BPC comes into the picture.

     

    SAP APO sends over the following records to enable Sales representatives to enter the forecast:

    Product | Customer | Region | Calendar Month | Historical Sales | Regional Manager Forecast | Sales Rep Forecast | Average Selling Price
    P1 | C1 | R1 | 201309 | 100 EA | 120 EA |  | 1.2 USD
    P1 | C2 | R1 | 201309 | 110 KG | 85 KG |  | 20 SGD
    P2 | C3 | R2 | 201309 | 1000 LTR | 1200 LTR |  | 5 INR

     

    This data is converted to the account model after being extracted from APO into a staging DSO/Cube, and will appear as follows:

     

    Product (ZPRDCT) | Customer (ZCUST) | Region (ZREGION) | Month (ZMONTH) | Keyfigure (ZBPC_KF) | ZUNIT | ZSIGN_DATA
    P1 | C1 | R1 | 2013.09 | HISTORICAL_SALES | EA | 100
    P1 | C1 | R1 | 2013.09 | REG_MANAGER_FORECAST | EA | 120
    P1 | C1 | R1 | 2013.09 | SALES_REP_FORECAST | EA | 0
    P1 | C1 | R1 | 2013.09 | AVERAGE_SELLING_PRICE | USD | 1.2
    P1 | C2 | R1 | 2013.09 | HISTORICAL_SALES | KG | 110
    P1 | C2 | R1 | 2013.09 | REG_MANAGER_FORECAST | KG | 85
    P1 | C2 | R1 | 2013.09 | SALES_REP_FORECAST | KG | 0
    P1 | C2 | R1 | 2013.09 | AVERAGE_SELLING_PRICE | SGD | 20
    P2 | C3 | R2 | 2013.09 | HISTORICAL_SALES | LTR | 1000
    P2 | C3 | R2 | 2013.09 | REG_MANAGER_FORECAST | LTR | 1200
    P2 | C3 | R2 | 2013.09 | SALES_REP_FORECAST | LTR | 0
    P2 | C3 | R2 | 2013.09 | AVERAGE_SELLING_PRICE | INR | 5
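The key-figure-to-account-model conversion shown in the two tables above is essentially an unpivot. A sketch in Python with pandas (column names follow the tables; ZUNIT handling is omitted for brevity, and pandas is used purely as an illustration - the actual conversion happens in BW staging, not in Python):

```python
# Unpivot the key-figure (wide) APO extract into the account model (long).
import pandas as pd

apo = pd.DataFrame({
    "ZPRDCT":  ["P1", "P1", "P2"],
    "ZCUST":   ["C1", "C2", "C3"],
    "ZREGION": ["R1", "R1", "R2"],
    "ZMONTH":  ["2013.09", "2013.09", "2013.09"],
    "HISTORICAL_SALES":      [100.0, 110.0, 1000.0],
    "REG_MANAGER_FORECAST":  [120.0, 85.0, 1200.0],
    "SALES_REP_FORECAST":    [0.0, 0.0, 0.0],
    "AVERAGE_SELLING_PRICE": [1.2, 20.0, 5.0],
})

# melt() turns each key-figure column into a ZBPC_KF / ZSIGN_DATA pair.
account_model = apo.melt(
    id_vars=["ZPRDCT", "ZCUST", "ZREGION", "ZMONTH"],
    var_name="ZBPC_KF",
    value_name="ZSIGN_DATA",
)
print(len(account_model))  # 12: 3 source rows x 4 key figures
```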

     

    We will assume that we have a simple planning model comprising the following dimensions:

     

    Model Dimensions
    DATASRC
    PRODUCT
    CUSTOMER
    REGION
    KEYFIGURE
    TIME
    UNIT

     

     

    The mapping will be 1:1. The transformation file will be as follows:

     

    DATASRC=*STR(APO_UPLOAD)

    PRODUCT=ZPRDCT

    CUSTOMER=ZCUST

    REGION=ZREGION

    KEYFIGURE=ZBPC_KF

    TIME=ZMONTH

    UNIT=ZUNIT

    SIGNEDDATA=ZSIGN_DATA

     

    When you validate and process the transformation file, the number of records processed will be shown as 12, which is expected (3 source rows x 4 key figures). It is now time to create a DM package to load the data from the DSO into the model. Once the DM package has executed successfully, the log will show the submit count as 12.

     

    Now when you create an input schedule, you will get all sorts of combinations based on the members of the dimension:

    Blog.jpg

    This is quite an undesirable output, as the planning combinations are already coming over from the planning systems and you do not need any additional planning combinations. You could therefore use the following under "Options" to take care of those:

    Blog_sheet.jpg

    Also notice that the records against SALES_REP_FORECAST show blank and not zero. This means that BPC does not load zero records by default. As a result, on selecting the above option you will have the following output:

    Blog1.jpg

    The output is undesirable in either case. It really bothered me, and I looked into various options and settings with no clue until I found the following setting in SPRO in the underlying BW system. Click on the "Execute" icon:

    SPRO.jpg

    Enter the name of the environment in the next screen:

    ENV.jpg

    Click on the "Create" icon in the next screen as shown:

    Param.jpg

    In the next screen enter the parameter as follows:

    zero.jpg

     

    Click "OK" and save it. Log in to the EPM add-in again and refresh the data. The output will now show as follows:

    results.jpg

     

    Keep in mind that these settings are at the Environment level and will be applicable to all the models in that environment. You can reset this setting by changing the value from "1" back to "0".

    How to perform expert sizing for SAP BPC NW (Part 1 - Basic Steps and Approach)


    I have been involved in multiple sizing engagements for customers and several large SAP BPC projects, and one of the things that always struck me was the perception regarding sizing, or rather the lack of information regarding these exercises. There seems to be some confusion about how custom or expert sizing is performed and how the sizing is determined.

     

    This blog will hopefully shed some light on how to perform expert sizing.

     

    Please note: the intention of this blog is to explore the sizing methodology and to provide some real-world examples of expert sizing done on past implementations. It is important to understand the process, but also to realize that in large implementations there is always a need to engage with SAP on expert sizing. Due to the nature of large project implementations, sizing becomes a critical factor for the successful implementation of a solution. This blog is meant to provide an overview of the sizing process, but it doesn't replace the need for expert sizing; there are several factors that need to be taken into account that this blog unfortunately doesn't cover.

     

    It is assumed that the reader has read the sizing guidelines for SAP BPC NW; this blog will make reference to these guides. They contain essential information required for expert sizing. It is highly recommended to always refer to the latest version of the sizing guidelines.

     

    Reference Link: SAP BPC NW Sizing Guidelines

     

    High Level Overview of the sizing methodology


    Step 1 - Some basics and foundation information

    First and foremost, you need to know what you are sizing and for which platform. In addition, there are some basics that you need to take into account. For example: what phase of the project are you in, what are some of the non-functional requirements, and what are some of the environmental influencing factors?


    Some of the major non functional factors that need to be taken into account are:

     

    • Is this a shared solution? (For example: are you deploying SAP BPC NW on a shared BW system landscape, or a standalone, isolated SAP BPC NW system landscape? This becomes more of a TCO and architectural discussion, but when performing sizing it is important to understand the constraints you are deploying SAP BPC into.)
      • If it is a shared SAP BW landscape: has SAP BW been sized correctly? Has the anticipated SAP BW usage been taken into account? At what point does it make sense to separate SAP BPC from the shared SAP BW landscape? Although some of these questions are subject to broader discussions on architecture, project and organizational principles, and governance, you will need to take them into consideration for your sizing exercise.

     

    • BW Platform: what platform are you sizing for? There are different sizing requirements and formulas for the different platforms. [Classic BPC NW: SAP BW runs on a classic RDBMS such as Oracle, IBM DB2, or MSSQL. BPC on HANA: SAP BW uses SAP HANA as its database platform.]

     

     

    Step 2 - Get some basic metrics and information

    In order to perform sizing, some basic and important information is needed; without it, any form of sizing will not be possible:

     

    The sizing guideline will provide a complete list of information that you will need. Typically this information will be provided by the respective functional teams or solution architect.

     

    It is also very important to understand your user community, as this will tell you how your SAP BPC NW solution will be used throughout the business, i.e. how many super users, how many input capturers, etc. Based on past experience, I find it easier to categorize the user community; based on the number of users in each category, you can determine the basic tasks and activities within SAP BPC.

     

    For Example:

     

    • Expert users - These are typically the users classified as 'Super Users'. They normally have the most expert knowledge of SAP BPC, are the 'go to' users for your SAP BPC community, and usually have the most intimate knowledge of the implementation environments and landscape. If you have a large number of expert users, you will need to take into account that some of them will be writing their own script logic, processing dimensions, changing business rules, and adding, changing, or deleting business process flows.
    • Business Users - This user base will form the majority of your SAP BPC user community. These users will probably use a combination of the EPM Add-In and the web interface, along with SAP BusinessObjects reports, and will typically be responsible for the process flows, data capturing, data modification, etc.
    • Information Consumers - This is typically a smaller group of users, who would only really refresh reports, submit data via input schedules, etc.

     

    Once you have an understanding of the user community and the anticipated usage of the SAP BPC solution, it becomes easier to estimate the anticipated workload of the system. In addition to the above, some very critical information that you will also need is:

    • Number of users
    • Number of environments
    • Master data integration and data architecture
    • Complexity of the application(s), business processes, and calculations
    • Data architecture and reporting architecture (are you going to use the SAP BusinessObjects Business Intelligence platform for your reporting, dashboards, etc.?)
    • Anticipated usage of the application, i.e. are the majority of your users going to use the EPM Add-In exclusively or the web front end? What percentage of users will use BPFs, and how complex will the BPFs be?

     

    Step 3 - Perform delta sizing from T-Shirt Sizing

    Once you have some basic metrics and information about the SAP BPC environments and applications, you can start to identify the baseline category for your sizing. The sizing guides define baseline T-shirt sizing categories; these offer an indicative baseline for the sizing exercise.


    It is important to understand that the baseline sizing categories provide you with an estimated, baseline number of SAPS; based on the information provided by the respective functional teams, you will be able to understand how much your implementation deviates from that baseline.


    Step 4 - Perform expert sizing and determine the delta SAPS

    Once you have determined your baseline SAPS, you use the formulas outlined in the sizing guides to determine the delta SAPS, which gives the anticipated number of SAPS required for your implementation. Although most of the steps outlined are fairly self-explanatory and are in the SAP sizing guidelines for the respective SAP BPC NW solutions, the real-world examples in Part 2 will make some of this information much clearer.
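    To make the arithmetic concrete, here is a minimal sketch of the baseline-plus-delta calculation. The baseline SAPS and per-activity SAPS figures below are placeholders, not values from the sizing guide; the real numbers come from the SAP BPC NW sizing guideline for your category:

```python
# Sketch of delta sizing: all SAPS figures below are illustrative
# placeholders, not figures from the sizing guide.
BASELINE_SAPS = 20000          # baseline SAPS for the chosen T-shirt category
SAPS_PER_USER = {              # assumed incremental SAPS per concurrent user
    "report_refresh": 40,
    "input_schedule_save": 60,
    "business_process_flow": 50,
}

def delta_saps(extra_users):
    """Sum the SAPS needed for concurrent users above the baseline category."""
    return sum(SAPS_PER_USER[activity] * n for activity, n in extra_users.items())

extra = {"report_refresh": 100, "input_schedule_save": 50, "business_process_flow": 160}
estimated = BASELINE_SAPS + delta_saps(extra)
print(estimated)  # 20000 + 4000 + 3000 + 8000 = 35000
```

    The structure mirrors the sizing-guide approach: a fixed baseline for the category, plus a per-activity cost for every concurrent user above what the baseline already covers.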


    Sizing Approach for the respective environments


    The graphic below provides a visual representation of the sizing approach and methodology. It is from a real-world sizing exercise and shows the approach taken for the respective environments.

    Sizing Approach.PNG


    Some Key Take away(s)

     

    • Sizing is based on inputs... so the most basic principle applies: garbage in, garbage out.

    Whatever your functional teams, solution architect, or project manager tell you, the values they input will have a direct and corresponding effect on the sizing. For example: if your functional team lead says that your SAP BPC solution will be deployed to 1,400 users, then you have to size for 1,400 users. This has a direct financial cost, because you will need a system that can handle 1,400 users.


    • Understand your SAP BPC community

    The more you understand your SAP BPC user community, the easier it becomes to manage and understand your workload. Different activities within SAP BPC place different loads on the system; the better you understand your community, the easier mitigation and management become.


    • Sizing is a small but very critical portion of the project.

    Understand that there are other factors that drive the adoption, deployment and ultimately the success of the SAP BPC solution.

     

    • Expert sizing should provide you with a comfort factor of about 70-80% on your platform and environment.

    Performance testing should clarify whether you have over-sized or under-sized your environment. Ultimately, performance testing should identify the sweet spot for your environment and solution landscape.

     

    • Sizing is an iterative exercise

    It is very important to understand that sizing is an iterative approach; once you have performed expert sizing, it does not mean that you are done and dusted. Expert sizing should provide you with a comfort level of 70-80% on what your landscape and environment should be, but performance testing will verify this and provide the opportunity to find the sweet spot of your environment, for example where you have under-sized or over-sized.

     

    In Part 2 of this blog series, I will provide a real life example of an expert sizing exercise for a large SAP BPC implementation

    How to perform expert sizing for SAP BPC NW (Part 2 - Example)


    In Part 1 of this blog series, I explained the basic steps and approach for performing expert or custom sizing for a SAP BPC NW solution. In Part 2, I provide an example output of a sizing exercise based on a real-life implementation. It is hoped that this will provide you with a base and sufficient information to understand the process and methodology of expert sizing.

     

    Example: Basic Information and Context to the sizing example

    This section provides some background information, intended to give context to the attached SIZING document, which is linked below in this blog.

     

    This example is based on a real-world SAP BPC NW implementation for a very large company in the financial services sector, as part of a larger ERP implementation project. The project was implemented using the standard SAP ASAP methodology and, at the time of writing this blog, is in the realization phase. Custom sizing was required in order to set up and procure the necessary environments for the project implementation.

     

    In Part 1, I detailed some of the steps that were taken to understand the base requirements and the approach by which the number of SAPS was determined. In many expert sizing exercises, SAP determines the number of SAPS that the customer or hardware vendor needs in order to size a system landscape. SAP does not make recommendations regarding the sizing of a system, but provides the number of SAPS needed for a hardware vendor or systems integrator to size one. In this example, the hardware platform is the IBM Power Series server platform. We worked in conjunction with the hardware vendor, IBM, firstly to establish certain assumptions (e.g. how many SAPS per core) and then to establish the logical system layout in accordance with the required number of SAPS. To determine how many SAPS per core, consult your hardware or virtualization vendor; the number of SAPS per core is determined per the SAP SD benchmark. This will give you an idea of how big your system landscape, and its logical system layout, will be.
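    Translating a SAPS total into a core count is then simple arithmetic. The 2,000 SAPS per core and the 65% target utilisation below are assumed values for illustration only; always take the real SAPS-per-core figure from your hardware or virtualization vendor's SD benchmark results:

```python
import math

def cores_required(total_saps, saps_per_core, target_utilisation=0.65):
    """Estimate the physical cores needed for a SAPS requirement.

    saps_per_core comes from the vendor's SAP SD benchmark; sizing to a
    target CPU utilisation below 100% is a common convention, not a rule.
    """
    return math.ceil(total_saps / (saps_per_core * target_utilisation))

print(cores_required(35000, 2000))  # ceil(35000 / 1300) = 27 cores
```

    The vendor will refine this rough figure during the logical system layout exercise, since memory, I/O, and virtualization constraints also shape the final configuration.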

    For Example: Make reference to Figure 8 for a logical system layout

     

    Reference Links:

    http://global.sap.com/campaigns/benchmark/index.epx

    https://service.sap.com/quicksizer

     

    Working through the SIZING document and its sections

     

    Download link to SIZING guide

    https://db.tt/81a5Muue

     

    Although this example is based on the IBM Power Series Server platform, you can use the same approach and methodology to size for any hardware platform, or virtual platform.

     

    Please Note: When sizing for a virtual platform, it is imperative to understand the constraints of a virtual environment and to understand SAP's policies on support regarding these platforms. It is also imperative to work in conjunction with the hardware or virtualization vendor to ensure that the most optimal environment landscape configuration is designed and configured.

     

    Based on the discussions with the functional teams and solution architects, I was able to determine that the proposed SAP BPC NW solution fell within CATEGORY 3 as defined in the sizing guideline and was going to be a complex implementation. Once the base category for the implementation is established, we can establish the baseline number of SAPS; this figure is the base from which we then determine the delta.

     

    The sections below refer to the sections in the attached SIZING document. This is a sample output from an expert sizing exercise and provides the number of SAPS for the proposed SAP BPC NW solution implementation. The process is typically followed up with the preferred hardware vendor in order to verify and design the system landscape in accordance with the SAPS requirements.

     

    Step 1 - Enter the number of users for the anticipated SAP BPC solution implementation

     

    In the attached SIZING document, the section 'User Community Information' is used to determine the user base and the number of concurrent users for your sizing exercise.

     

    Figure 1 - Screen shot of the section 'User Community Information' in the SIZING document

    UserCommunityInfo.PNG

    Step 2 - Determine the performance factors and influences

     

    The next section of the document details the performance influences and considerations that need to be taken into account when performing a sizing exercise.

     

    Figure 2 - Screen shot of the section performance influencing factors

    PerformanceInfluences.PNG

     

    Step 3 - Enter the information in the 'What If' scenarios section of the SIZING document

    The rationale for this section is that, due to the nature of SAP BPC NW implementations, the design is subject to change, and it is important for the sizing to cater for what-if scenarios. If the information for your implementation is not subject to change, then you don't have to use the 'What If' scenarios section.

     

     

    Figure 3 - Screen shot of the 'What If' scenario section in the SIZING document

    Scenarios.PNG

     

    Step 4 - Determine the delta values from the baseline category

    In this step, you determine the delta or deviation from the baseline category. The colors are a high-level heat map indicating how much of the baseline is relevant to your specific implementation. The baseline SAPS are also used to determine the base configuration and your initial starting point for the sizing exercise. In this example, the number of SAPS indicates that this will be a large implementation with a large system footprint.

     

    Figure 4 - Screen shot of the delta values section in the SIZING document

    BaselineInfo.PNG

     

    Step 5 - Determine the delta number of SAPS from the baseline number of SAPS

     

    Once you have established the delta values, you use the formulas in the sizing guides to determine how many SAPS are required to cater for the delta from the base. For example: the baseline in the attached SIZING document is CATEGORY 3, with 300 concurrent users. The baseline category provides the number of SAPS it takes for 30 users to run a Business Process Flow, but we estimate that in our implementation around 190 users will be running Business Process Flows concurrently, so we need to determine how many SAPS it will take to run the additional 160 users.
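    The arithmetic for that example can be sketched as follows (the SAPS-per-BPF-user figure is a hypothetical placeholder; the real value is read from the baseline category in the sizing guide):

```python
BASELINE_BPF_USERS = 30   # BPF users already covered by the CATEGORY 3 baseline
PLANNED_BPF_USERS = 190   # concurrent BPF users expected in this implementation
SAPS_PER_BPF_USER = 50    # placeholder; take the real figure from the sizing guide

delta_users = PLANNED_BPF_USERS - BASELINE_BPF_USERS  # users not covered by baseline
delta_bpf_saps = delta_users * SAPS_PER_BPF_USER      # extra SAPS to cater for them
print(delta_users, delta_bpf_saps)  # 160 8000
```

    The same subtraction and multiplication is repeated for every activity (report refreshes, input schedule saves, and so on) before the deltas are summed.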

     

    Figure 5 - Screen shot of the delta number of SAPS calculation section

    DelatCals.PNG

     

    Step 6 - Add the Delta to the Baseline SAPS

    Once you have calculated the number of SAPS required for the delta values, you add the baseline number of SAPS to the delta number of SAPS to arrive at an estimated total number of SAPS.

     

    Figure 6 - Screen shot of the Estimated number of SAPS section

    EstimatedSAPS.PNG

     

    The SAPS worksheet in the SIZING document contains the final number of SAPS required for the SAP BPC NW implementation. Once you have an estimated number of SAPS, you need to consult with the hardware/virtualization vendor in order to build and size the system accordingly.

     

    Figure 7 - Screen shot of the SAPS worksheet in the SIZING document

    SAPS.PNG

     

    This real-life example is a large and complex implementation with over 3,000 SAP BPC users. The number of SAPS is very high, and you will need the expert guidance of the hardware vendor to determine the preferred logical system layout.

     

    Figure 8 - Logical System Layout

    Logical System Layout.PNG

     

     

     

    Final Thoughts

    Sizing can be a complex and complicated exercise due to the nature of a SAP BPC NW implementation. The sizing process is extremely important, as it provides a base for your implementation, and it is also important to understand the process involved in performing expert sizing. In this example, there were several engagements with the hardware vendor in order to establish the most optimal configuration and to determine the estimated number of SAPS per core.

     

    It is also important to understand that although sizing can be a pretty scientific exercise, there can be no substitute for performance and stress testing.

     

    I hope that this blog series sheds some light on how to perform expert or custom sizing. Your comments and suggestions are welcome.

    Dramatically reduce Lite Optimize run time


    Hi all

     

    I've been looking at the Lite Optimize process for some time now and would like to share some further knowledge to help you tweak the Lite Optimize process chain, in order to reduce the amount of time your model is out of planning mode: the window during which users cannot save data and reports can return dodgy data. SAP Note 1649749 talks about this issue.

     

    I recently wrote a blog, Lite Optimize - A little guide to the big things you need to know, which looks at various ways you can improve and use the Lite Optimize process effectively. This article looks at a new approach aimed solely at reducing the amount of time your model sits outside of planning mode. This in no way aims to put down the original design of the underlying Lite Optimize process chain, but merely gives you an option if, like us, you find yourself with a 40-minute period during which users cannot save data. In an international organisation, this can be a pain! One thing I will put down, though, is BPC's current inability to inform users that saving is not possible: a blank save confirmation screen is not acceptable, and many of our users have left the input schedule thinking data had been saved after spending much time inputting it! I will also recommend an improvement to this awareness in the post.

     

    Reduce time model is out of plan mode


    I'll get straight to the point. As you probably already know, users are unable to save back to the model while the Lite Optimize process is running, and reports on the data can produce inconsistent results. The process can run for varying amounts of time depending on the number of unprocessed requests waiting to be optimised in the model. In our organisation, we run the process once a day after the overnight data loads; it takes around 40 minutes.

     

    The change I am suggesting here is quite simple: consider moving the 'BPC: Create Statistics' step to after the step where the cube is put back into plan mode. For us, updating the statistics accounts for 85% of the running time of the overall chain.

     

    Untitled.png

    This isn't the first time this technique has been suggested: a blog post in January 2013 suggested that two processes could be run in parallel (one which starts the statistics, and another which pauses the chain at that point for 3 seconds before moving the model back into planning mode while the statistics job continues to run). Credit is due for the high-level idea, but the technical suggestion was incorrect. Firstly, concurrent processing is not supported in chains run from BPC. Also, and very key, the cube does not need to be in plan mode when the 'Create Statistics' step kicks off.

     

    The reason SAP designed the chain like this (I assume) is so that the end user can only write back to the cube once performance is at optimum levels. The question you have to ask yourself before implementing this change is: "Am I willing to allow users to write back and use the model after the cube is switched back to planning mode, while the statistics are still updating?" For us, performance doesn't really suffer too badly, and we're far more concerned about reducing our down-time.

     

     

    Improve awareness of inability for users to save

     

    Regardless of whether you choose to implement the above, it remains the case that BPC does not let a user know when a Lite Optimise is running. Unless your users are diligent enough to check the 'View Status' of run packages before every save (I very much doubt it), it is quite likely that anyone attempting to save data will not know that it has been unsuccessful.

     

    Although the following change does not fully prevent this problem, it does go some way to help avoid it.

     

    I have created a program which switches the BPC environment online or offline whenever it is executed. I recommend adding this step to the Lite Optimize process chain at the point where the cube is taken out of planning mode, and again once it is put back. Any user not already in the system will then be prevented from accessing it until the cube returns to planning mode, and an informative message makes the user aware of this.

     

    Unfortunately, anyone already in the system will be blissfully unaware that their next save could be doomed. It is also quite common for reports to return incorrect data during the optimisation process (see the SAP Note linked in the initial section).

     

    Here is the code:

     

    Untitled.png

     

    I hope this post might help someone or at least provoke some suggestions on other ways to get around the problems I have described.

     

    That's all for now.

     

    Ian

    Full Optimization and Light Optimization In Detail.


    When you create new environments and models, only a small amount of data exists. Since the amount of data you maintain grows over time, we recommend you periodically run the optimize function to improve performance.

    There are two different types of optimization available:

     

    • Light Optimization:

     

    Closes the open request, compresses (without zero elimination) and indexes the cube, and updates database statistics for the BW InfoCube.

     

    Background (ref SAP 1551154)

    • It is a maintenance process similar to the BW InfoCube performance maintenance tasks of index rebuild, statistics rebuild, and compression.
    • This process deletes the InfoCube indexes and moves records from the F fact table to the E fact table.
    • It compresses/collapses records that share the same key.
    • Additional zero suppression is possible (manually adjusted in the process chain).
    • It updates the InfoCube statistics.
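    The compress/collapse step can be illustrated with a minimal sketch (conceptual only, not the actual BW implementation): rows sharing the same key are summed into one record, and optional zero elimination drops records that net to zero:

```python
from collections import defaultdict

def compress(rows, zero_elimination=False):
    """Collapse rows with the same key by summing their values,
    mimicking what F-to-E fact table compression does conceptually."""
    totals = defaultdict(float)
    for key, value in rows:
        totals[key] += value
    return {k: v for k, v in totals.items()
            if not (zero_elimination and v == 0)}

rows = [(("2014.JAN", "ACC1"), 100.0),
        (("2014.JAN", "ACC1"), -100.0),
        (("2014.FEB", "ACC1"), 250.0)]
print(compress(rows))                         # JAN nets to 0.0 but is kept
print(compress(rows, zero_elimination=True))  # only the FEB record remains
```

    This is why compressed cubes read faster: fewer records survive, and with zero elimination, records that cancel out disappear entirely.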

     

    How often should it be run? (ref SAP 1508929)

    • There is no set recommendation as to the frequency; however, it is generally recommended that these (index/statistics) tasks be executed after the load process to provide satisfactory performance for reporting and input schedule updates.
    • From 1508929 “There is no rule of thumb for how often to run optimizations. The need can vary depending on the characteristics of your hardware environment and your application.”

     

    As best practice:

    1. When new application sets and applications are created, run a Full Optimization.
    2. Lite Optimization doesn’t take the system offline, and can be scheduled during normal business activity (e.g. after a data load).
    3. Full Optimization needs the system to be offline, so it should be run during down-time periods, for example after a month-end close.

     

     

    Full Optimization background (ref SAP 1551154)

    • This process essentially rebuilds the application in its entirety. The process creates a new application and copies the data to the new application.
    • This process is a RESTRUCTURING of the BPC application data model.

     

     

     

     

    • Full Optimization:

    Performs the same operations as Light Optimization, but also checks the NetWeaver BW data model. If the data model can be improved, Full Optimization will rebuild it, and this can take a long time to run for cubes with large data volumes.

    Full Optimization will check whether the data model of the BW cube is built appropriately. Specifically, it will check that:

    • Dimension tables are less than 20% of the size of the fact table

    • As many line-item dimensions as possible are used

    If the cube structure can be optimized, then it will:

    • Take Environment Offline

    • Create a Shadow Cube with Optimal Data Model

    • Move data to shadow cube, delete original cube

    • Close Open Request, Compress and Index the Cube, Update Database Statistics

    • Bring Environment back online.
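    The first data-model check above (dimension tables under roughly 20% of the fact table size) can be sketched as a simple ratio test; the table names, row counts, and threshold here are illustrative only:

```python
def oversized_dimensions(fact_rows, dim_row_counts, threshold=0.20):
    """Flag dimension tables whose row count exceeds ~20% of the fact table,
    the heuristic used to decide whether the data model needs restructuring
    (e.g. converting a dimension to a line-item dimension)."""
    return {dim: rows for dim, rows in dim_row_counts.items()
            if rows / fact_rows > threshold}

# Hypothetical row counts for illustration:
print(oversized_dimensions(1_000_000, {"TIME": 120, "ENTITY": 300_000}))
# {'ENTITY': 300000} -> candidate for restructuring
```

    If any dimension trips the threshold, the shadow-cube rebuild described above is triggered; otherwise Full Optimization behaves like a Light Optimization.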

    Because this will technically be a new cube, any changes that had been made to (or surrounding) the underlying InfoCubes will be lost, which means that the following will be lost:

    • Data Transfer Processes (DTPs) to/from the cube

    • Aggregates and Business Intelligence Accelerator (BIA) indexes

    Full Optimization requires that the environment is offline; Light Optimization does not take the system offline.

    Be careful running a Full Optimization, since significant system resources are used in the process.

    You can choose to perform either a light optimization or a full optimization.

    It is possible to execute the optimization online or via a Data Manager package.

     

    Model Optimization in MS Version.

     

    Data is stored in models on the following levels:

    • Real-time (Write Back table): corresponds to the most current data sent to the system using the input forms.

    • Short-term (FAC2 table): corresponds to the data created or loaded using the Data Manager packages.

    • Long-term (FACT table): corresponds to the FACT table, which offers better performance.
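    Conceptually, the optimization options in the MS version move data between these three tiers. The following toy sketch (not the actual implementation) illustrates the movement:

```python
class Model:
    """Toy three-tier storage model: write-back, short-term (FAC2), long-term (FACT)."""
    def __init__(self):
        self.write_back = []   # real-time data from input forms
        self.short_term = []   # FAC2: data from Data Manager packages
        self.long_term = []    # FACT: best read performance

    def lite_optimize(self):
        # Moves real-time data into short-term storage; system stays online.
        self.short_term += self.write_back
        self.write_back = []

    def incremental_optimize(self):
        # Moves everything into long-term storage; system goes offline.
        self.long_term += self.write_back + self.short_term
        self.write_back, self.short_term = [], []

m = Model()
m.write_back = [10, 20]
m.lite_optimize()
print(m.short_term)   # [10, 20]
m.incremental_optimize()
print(m.long_term)    # [10, 20]
```

    This is why Lite Optimize can run during business hours while Incremental (and Full) Optimize cannot: only the latter touch the long-term FACT storage.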

     

    You periodically optimize your models to improve system performance. The optimization options are as follows:

    Optimization Options:

    • Lite Optimize: clears the real-time data storage and moves data to the short-term data storage. This option does not take the system offline.

    • Incremental Optimize: clears the real-time and short-term data storage and moves data to the long-term data storage. This option takes the system offline.

    • Full Optimize: runs an incremental optimization and processes the dimensions.

    • Compress Database: sums multiple identical entries into a single record.

    • Index Defragmentation: forces a reindex of the database. This option may take a long time.

    For example, you can run Lite Optimize every day, and Incremental Optimize every week.

    Caution: When you create a new model, it is best practice to run a full optimization.

    Note: You can set up an automatic reminder to optimize the model. The reminder appears when you log on to the Planning and Consolidation Administration. In the model parameters, you define the number of records (for example, 50,000) that must be reached before the reminder appears.


    References:

     

    http://www.claricent.com/

    http://scn.sap.com/

    BPC410 SAP Business Objects Planning and Consolidation

    BPC430 SAP Business Objects Planning and Consolidation

     

     

    Thanks and Regards,

    Saida Reddy.G


