Wednesday, December 22, 2010

Stage Copy Solution for Faster Dev and Sandbox system copies

CAUTION: PLEASE DO NOT COPY AND PASTE ANY OF THE COMMANDS.

Environment: RHEL (2.6 kernel) on HP ProLiant blades, Oracle 10.2.0.4, SAP R/3

The idea is to restore an online backup of DEV onto SND.
The systems in question are the source DEV and the target SND.

Because of time constraints we needed an efficient backup/restore method, so we opted for the stage_copy method.

The following procedure describes the stage copy method.
Online backup profile - initDEV_online.sap
Create a new profile for stage_copy based on the online backup profile and name it initDEV_online_stage.sap.

vi initDEV_online_stage.sap and set/modify the following parameters
backup_mode = all
restore_mode = all
backup_type = online
backup_dev_type = stage
backup_root_dir = $SAPDATA_HOME/sapbackup
stage_root_dir = /oracle/SND/sapbackup
compress = no
compress_dir = $SAPDATA_HOME/sapreorg
archive_function = save
archive_copy_dir = $SAPDATA_HOME/sapbackup
archive_stage_dir = /oracle/SND/sapbackup
new_db_home = /oracle/SND
stage_db_home = /oracle/SND
remote_host = fctfrmSND
remote_user = 'oraSND test123' (user and password; can be hashed out, i.e. commented, when ssh keys are used - see the ssh check sketch after this list)
stage_copy_cmd = scp
exec_parallel = 2
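
Since stage_copy_cmd = scp relies on passwordless ssh from DEV to SND, it is worth verifying the trust before starting. A minimal sketch, assuming ssh keys are used for oraDEV -> oraSND and that fctfrmSND is reachable from DEV (the key file name and test file are only examples):

# On DEV as oraDEV: create a key pair if one does not exist yet
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

# Copy the public key to the target user on SND (asks for the password once)
ssh-copy-id oraSND@fctfrmSND

# Verify that ssh and scp work without a password prompt
ssh oraSND@fctfrmSND "hostname; id"
scp /tmp/testfile oraSND@fctfrmSND:/tmp/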

Preparation of filesystems in SND (target)
        1. Validate the mount points and their permissions.
        2. Clean up all sapdata filesystems and compare the sizes between DEV and SND.
        3. Clean up all origlog* and mirrlog* filesystems.
        4. Clean up the saparch, oraarch, sapreorg and sapbackup filesystems.
        5. Disable the archive log and online backup scripts in the cron on both DEV and SND.
        6. Stop SAP and Oracle on SND (target).
        7. Run cleanipc <instance_number> remove and remove any remaining shared memory segments using ipcrm.
        8. Kill all the processes still running under SNDadm and oraSND (see the sketch after this list).
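
A minimal sketch of steps 6 to 8 on SND, assuming instance number 00; the instance number, stop commands and process-name patterns vary by release and are assumptions here, not values from this system:

# As SNDadm: stop SAP (and, depending on the kernel release, the database)
stopsap

# Remove SAP IPC objects for instance number 00 (adjust the instance number)
cleanipc 00 remove

# List shared memory segments still owned by SNDadm or oraSND, then remove them one by one
ipcs -m | grep -Ei 'sndadm|orasnd'
# ipcrm -m <shmid>

# Kill any leftover processes of both users (run as root, or as each user for its own processes)
pkill -9 -u SNDadm
pkill -9 -u oraSND
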
Initiate online restore from DEV to SND using stage_copy profile
  Ø       Ensure ssh is set up on SND (target) for the users oraSND and oraDEV.
  Ø       Login to DEV as oraDEV and issue the following command.
  Ø       brbackup -p initDEV_online_stage.sap -t online -c -u / &
  
Upon successful completion of brbackup, the online backup of DEV has been restored into the directories of the SND system. Now we need to transfer the archive log files from DEV to SND, depending on the point in time requested for the recovery (PITR). Use sftp from DEV to SND, copy the required archive logs to /oracle/SND/oraarch, and rename them from DEV* to SND*.
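
A minimal sketch of the copy-and-rename step, assuming the archive logs live in /oracle/DEV/oraarch on DEV and that sequences 9921 to 9929 are needed (paths and the sequence range are only examples):

# On DEV as oraDEV: copy the required archive logs to SND
scp /oracle/DEV/oraarch/DEVarch1_992[1-9]_*.dbf oraSND@fctfrmSND:/oracle/SND/oraarch/

# On SND as oraSND: rename DEV* to SND*
cd /oracle/SND/oraarch
for f in DEVarch1_*.dbf; do
    mv "$f" "SND${f#DEV}"
done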

Restore archive log files from tape
  Ø  If the required archive log files have already been backed up to tape and are no longer available in oraarch, issue the following command.
  Ø  brrestore -a 9921-9929 -p initDEV_arch.sap


Re-enable the archive log and online backup scripts on the DEV server.

Control file creation
  Ø  Login to SND and rename the existing control files.
o    mv /oracle/SND/origlogA/cntrl/cntrlSND.dbf /oracle/SND/origlogA/cntrl/cntrlSND.dbf.old
o    mv /oracle/SND/saparch/cntrl/cntrlSND.dbf /oracle/SND/saparch/cntrl/cntrlSND.dbf.old
o    mv /oracle/SND/sapdata1/system_1/cntrl/cntrlSND.dbf /oracle/SND/sapdata1/system_1/cntrl/cntrlSND.dbf.old
  Ø  Login to DEV and create the control file trace
  Ø  alter database backup controlfile to trace; then edit the generated trace into a create-controlfile script, replacing DEV with SND (see the sketch after this list), so that it begins with:
  Ø  CREATE CONTROLFILE SET DATABASE "SND" RESETLOGS NOARCHIVELOG
o    SQL> @controlSND.sql
o    ORACLE instance started.
o    Total System Global Area 2600468480 bytes
o    Fixed Size                  2086192 bytes
o    Variable Size            1308625616 bytes
o    Database Buffers         1275068416 bytes
o    Redo Buffers               14688256 bytes
o    Control file created.
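
A minimal sketch of turning the DEV control file trace into controlSND.sql; the trace file name and paths are only examples, and the generated script still needs manual editing:

# On DEV as oraDEV: write the control file trace to a known file (Oracle 10g syntax)
sqlplus -s / as sysdba <<'EOF'
alter database backup controlfile to trace as '/tmp/cntrl_DEV.trc';
exit
EOF

# Rename DEV to SND everywhere, then edit the result by hand:
# keep only one CREATE CONTROLFILE statement, change REUSE DATABASE to SET DATABASE "SND",
# use RESETLOGS NOARCHIVELOG, and drop the comment lines and the RECOVER/OPEN statements.
sed 's/DEV/SND/g' /tmp/cntrl_DEV.trc > /tmp/controlSND.sql

# Copy the finished script to SND
scp /tmp/controlSND.sql oraSND@fctfrmSND:/oracle/SND/
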
Recovery Steps
  Ø  Now, the database is in mount state. Issue the following command to start the recovery.
  Ø  recover database using backup controlfile until time '2010-11-17:09:00:00';
o    SQL> recover database using backup controlfile until time '2010-11-17:09:00:00';
o    ORA-00279: change 4326224613 generated at 11/17/2010 08:18:44 needed for thread 1
o    ORA-00289: suggestion : /oracle/SND/oraarch/SNDarch1_9930_721386457.dbf
o    ORA-00280: change 4326224613 for thread 1 is in sequence #9930
o    Specify log: {=suggested | filename | AUTO | CANCEL}
o    AUTO
o    Log applied.
o    Media recovery complete.
  Ø  Open the database using the following command.
o    alter database open resetlogs ;
o    lsnrctl start LISTENER_SND;
Post Recovery Steps
  Ø  Download and execute ORADBUSR.SQL (SAP Note 50088) with the following command
  Ø  sqlplus /nolog @ORADBUSR.SQL SAPR3E UNIX SND X
  Ø  Add the temp datafiles.
o    ALTER TABLESPACE PSAPTEMP ADD TEMPFILE '/oracle/SND/sapdata1/temp_1/temp.data1' SIZE 5000M REUSE AUTOEXTEND OFF;
o    ALTER TABLESPACE PSAPTEMP ADD TEMPFILE '/oracle/SND/sapdata3/temp_2/temp.data2' SIZE 5000M REUSE AUTOEXTEND OFF;
  Ø  Run the statistics
o    brconnect -u / -c -f stats -t oradict_stats
o    brconnect -u / -c -f stats -t system_stats
o    brconnect -c -u / -f stats -t all -f collect -p 4
  Ø  Enable archive log mode for the SND Oracle database (see the sketch below).
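
A minimal sketch of enabling archivelog mode on SND (the instance is bounced briefly):

# As oraSND
sqlplus -s / as sysdba <<'EOF'
shutdown immediate
startup mount
alter database archivelog;
alter database open;
archive log list
exit
EOF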

Ensure the TSM backups are running fine and re-enable the archive log and online backup scripts in cron.

Tuesday, August 17, 2010

Oracle Block Corruptions


 There are different layers where a corruption can occur.

1. OS layer
2. DB layer
3. SAP Pool / Cluster table layer
4. Application layer

The best way to begin the investigation is through File system check.
SAP pool or cluster table layer check can be done with the tool R3check.

DB layer corruptions
A corrupted block can occur in datafiles or in redologfiles.

Consistency check can be performed with the following tools.

1. dbverify - checks if the database block fulfills the predefined rules
2. export - reads the data stored in table blocks.
3. analyze - reads table and index data and performs cross checks.

Note: If RMAN is used to back up the database, an implicit consistency check of the saved blocks is performed, so an explicit dbverify check is not needed.

The above 3 methods can only be used for datafiles.
Corruptions in redolog files can only be found when applying the redologs during recovery. If you dump a redolog file and do not get an error, this is not sufficient to prove that the redo log is ok.

We basically dump the contents of a redo log file to a trace file and then read the trace file to understand the content. Below are some useful commands.

The following ways of dumping a redo log file are covered

1. To dump records based on DBA (Data Block Address)
2. To dump records based on RBA (Redo Block Address)
3. To dump records based on SCN
4. To dump records based on time
5. Dump the file header information
6. Dump an entire log file

1. To dump records based on DBA  (Data Block Address)

Connect to the database as sysdba and execute the following command:
ALTER SYSTEM DUMP LOGFILE 'filename' DBA MIN fileno . blockno DBA MAX fileno . blockno;

Example:

ALTER SYSTEM DUMP LOGFILE 'u01/oracle/V7323/dbs/arch1_76.dbf' DBA MIN 5 . 31125 DBA MAX 5 . 31150;
This will cause all the changes to the specified range of data blocks to be dumped to the trace file.  In the example given, all redo records for file #5, blocks 31125 thru 31150 are dumped.

2. To dump records based on RBA (Redo Block Address)

This will dump all redo records for the range of redo addresses specified for the given sequence number and block number.

Syntax:
ALTER SYSTEM DUMP LOGFILE 'filename' RBA MIN seqno blockno RBA MAX seqno blockno;

Example:
ALTER SYSTEM DUMP LOGFILE 'u01/oracle/V7323/dbs/arch1_76.dbf' RBA MIN 2050 13255 RBA MAX 2255 15555;

3. To dump records based on SCN

Using this option will cause redo records owning changes within the SCN range
specified to be dumped to the trace file.

ALTER SYSTEM DUMP LOGFILE 'filename' SCN MIN minscn SCN MAX maxscn;

Example:
ALTER SYSTEM DUMP LOGFILE 'u01/oracle/V7323/dbs/arch1_76.dbf' SCN MIN 103243 SCN MAX 103294;

4. To dump records based on time

Using this option will cause redo records created within the time range specified to be dumped to the trace file.
ALTER SYSTEM DUMP LOGFILE 'filename' TIME MIN value TIME MAX value;

Example:
ALTER SYSTEM DUMP LOGFILE 'u01/oracle/V7323/dbs/arch1_76.dbf' TIME MIN 299425687 TIME MAX 299458800;

Please Note: the time value is given in REDO DUMP TIME

5. Dump the file header information

This will dump file header information for every online redo log file.

alter session set events 'immediate trace name redohdr level 10';

6. Dump an entire log file:

ALTER SYSTEM DUMP LOGFILE 'filename';

Please note: Fully qualify the filename, and include the single quotes.

Example:
ALTER SYSTEM DUMP LOGFILE 'u01/oracle/V7323/dbs/arch1_76.dbf';

Note: Be aware that brconnect -f check is not a consistency check; it only checks things like remaining free space, parameter settings, the last successful backup, and so on.

A proper consistency check can prevent data loss caused by corruption. If you check for corruptions sufficiently often, you can restore the file containing the corrupted blocks from a backup that does not contain the corruption and recover up to the current point in time; it is quite unlikely that the same corruption is present in the archive logs as well.

Recommendation: Check the database for consistency at least once a week.

Analyze and export read the blocks to be checked into the SGA, which is why the buffer quality of the DB block buffer is adversely affected for a short while.

Analyze – checks both tables and indexes and may run for a long time; it is better to run it when the system workload is low. Buffer quality is adversely affected.
Export – checks only tables. There is a performance loss as a result of the export processes. Certain kinds of corruptions are exported without an error; if you later try to reorganize such a table (export/import), you will run into problems during the import. Buffer quality is adversely affected.

Dbverify – can be run during normal operation. It checks tables, indexes and also empty DB blocks, but it does not check cross references. It reads the blocks without loading them into the SGA, so buffer quality is not affected. Dbverify can also be run on data files restored from a backup (without these files having to belong to a database), and it checks the data in LOB columns as well. A minimal sketch of a manual check run follows below.
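
The sketch below shows such a manual run; the datafile path, block size and table owner/name are examples only, not values from a particular system:

# dbverify on a single datafile (can also be run on a file restored from a backup)
dbv file=/oracle/C11/sapdata1/system_1/system.data1 blocksize=8192 logfile=/tmp/dbv_system1.log

# analyze one table and its indexes inside the database
sqlplus -s / as sysdba <<'EOF'
analyze table SAPR3.MARA validate structure cascade;
exit
EOF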

To be continued...

Cluster table diagnosis using R3check utility

Use this utility to check whether a cluster table contains incorrect cluster records.
If a termination occurs when you process a cluster table in productive operation, you can find information about the cluster table in which the problems occur in the corresponding system log and short dumps.
For the diagnosis, execute the program R3check, which generates an output line with the key of the affected record for every defective cluster record. To do this, create a control file at operating system level with the following contents:
export file='/dev/null' dumping=yes client=CLIENTID
select * from log.clustertable

and start the check as the <sid>adm user:
R3check controlfile > outputfile

The lines in the output have the following structure:
phys. cluster table:  key1*key2*key3... (error number)
To check a physical cluster table, call R3check for a logical cluster table which belongs to the corresponding physical cluster table.

If, after consulting the SAP hotline on how to proceed with the error, you need to export the data belonging to a certain cluster key, generate an additional control file with the following contents:
export file='datafile' client=nnn
select * from phys. cluster table where KEY1 = key1 and KEY2 = key2
and ...
and start the export as the <sid>adm user:
R3trans  controlfile
Here 'datafile' stands for the name of the data file into which the exported data is to be written.
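
As a worked example, a minimal sketch of a check run; the logical cluster table name ZZCLUSTER, client 100 and the file locations are placeholders, not values from this system:

# Create the R3check control file
cat > /tmp/r3check.ctl <<'EOF'
export file='/dev/null' dumping=yes client=100
select * from ZZCLUSTER
EOF

# Run the check as the <sid>adm user; each output line lists the key of a defective cluster record
R3check /tmp/r3check.ctl > /tmp/r3check.out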

Monday, August 16, 2010

JMS Message Monitoring in CE 7.1 system

In this article we review the Telnet commands available for the SAP JMS Provider and
how you can use these commands in combination to investigate a particular problem or simply to
monitor the runtime status of the JMS Provider.

Telnet Commands Related to the SAP JMS Provider
The SAP JMS Provider service provides Telnet commands for administering and monitoring the JMS
resources in a server. To use these commands, you have to connect to the AS Java using Telnet and enable
the JMS command group with add jms. To show the help information about the available commands
in the jms group, you can type one of the following at the Telnet command prompt:
- jms -h
- jms -?
- man jms

There are several subgroups of JMS Telnet commands:
- jms_list – this command provides details about the JMS runtime environment
- list_temp_destinations – this one displays all currently active temporary destinations.

Example Scenario
All JMS-related Telnet commands give you the possibility to monitor the runtime state of the SAP JMS
Provider. To illustrate the usage of the described Telnet commands, let us consider one simple scenario.
Imagine we just discovered that some persistent messages are not delivered to our application and we want
to investigate what might have happened with them. For the purposes of this example, we will use the
sapDemoQueue destination and the default JMS Virtual Provider.

The following procedure describes one possible path of investigation and the respective sequence of
commands.

Telnet to the Java server:
telnet localhost 50008
Log on as Administrator / password
command >> add jms

jms_list msg sapDemoQueue default
This command lists all messages sent to the sapDemoQueue destination that are present in the database. If there are no messages in this list, we know that there are currently no messages pending for delivery - either no messages have been produced, or all that have been produced have already been consumed and acknowledged. We can try to determine which producer was supposed to send them.

jms_list producers default
This command lists all producers registered to destinations belonging to the default JMS virtual provider. Note that this is the same virtual provider to which our destination belongs. From this list we can determine the producer ID, the destination to which the producer sends messages, its session ID and its client ID. By the client ID, we can later find the consumer that is supposed to receive the messages. In this case, we look for producers registered to the sapDemoQueue destination. This is a way to determine whether there is a currently active producer registered to our destination.

If there are messages pending to be delivered, then we have to continue our investigation with consumers that are expected to receive them. We can check the status of the JMS connection - how many bytes have been sent and received through it and when it was last accessed.

jms_list connections default
We use the client ID to check whether there are any active connections and when a particular connection was last accessed. The JMS virtual provider again has to be the same.
Note: If you want to find the connection corresponding to your consumer, you need the connection whose client ID is equal to that of the already found consumer.

We can also check the status of the consumers registered to the sapDemoQueue destination.

jms_list consumers default
This command lists all currently active consumers registered to destinations belonging to the default JMS virtual provider. From this list we can determine the consumer ID, the destination to which the consumer is registered, the session ID and the client ID. If there is no consumer registered to
sapDemoQueue, then we know that our application does not receive messages because it failed for some reason to create the respective consumer, and we can continue the investigation in this direction, for example by checking the server traces for relevant exceptions.

If there is an active consumer but it still does not receive any of the pending messages, it is possible that there is an issue in the application's message processing logic which causes the messages to be redelivered again and again. By default, message delivery attempts are limited, and once they are exhausted for a particular message, it is considered undeliverable (dead), skipped by the consumer and moved to the configured error destination of the original destination. To determine the error destination of the sapDemoQueue destination, we have to use the configuration editor. In the Display configuration tab, expand Configurations -> jms_provider -> default -> queues -> sapDemoQueue -> Propertysheet data. In the property sheet you can find the error destination of a particular destination. In our case, the error destination of sapDemoQueue is sapDefaultErrorQueue.

Then, we can check if there are any messages in the error destination.

jms_list msg sapDefaultErrorQueue default
With this command we can check if the missing messages are present in the error destination.

If our application is unable to consume some of the messages, we have to check why, and then we may want to do something with the undelivered messages. Since error destinations are just ordinary JMS destinations, you can access dead messages using the standard JMS API - for example, your application (or a dedicated tool) can consume and process the messages from the error destination; it can even return them to the original destination, if that is the error handling logic of the application.

Note that we can configure the following properties in jms-resources.xml related to the dead-message functionality.

a. deliveryAttemptsLimited - a Boolean property that indicates whether the message delivery
attempts are limited. The default value is "true".
b. maxDeliveryAttempts - an Integer property that indicates the maximum number of delivery
attempts before the message is considered undeliverable (dead). The default value is 5.
c. deliveryDelayInterval - the delay in milliseconds between two consecutive message delivery
attempts. The default value of this property is 2000 milliseconds.
d. errorDestination - the name of a JMS Queue to which dead messages will be forwarded. If you
leave this property blank (""), dead messages will be discarded.

These four properties are configurable per JMS destination.

Note: The default error destination has an empty string for its errorDestination property; otherwise, when a message becomes dead in its original destination and then also becomes dead in the error destination, this could lead to repeated transfers of the message through error destinations and potentially even to an endless message delivery loop.

Note: The value of the errorDestination property must be the name of an already existing Queue.

Wednesday, July 14, 2010

Kinds of SAP Tables

What is a transparent table?
Transparent Table : Exists with the same structure both in the Dictionary and in the database, with exactly the same data and fields.


What is a pooled table? Where and when do we use these tables?
Pooled Table : Pooled tables are logical tables that must be assigned to a table pool when they are defined. Pooled tables are used to store control data. Several pooled tables can be combined in a table pool. The data of these pooled tables is then stored in a common table in the database.


What is a cluster table? Where and when do we use these tables?
Cluster Table : Cluster tables are logical tables that must be assigned to a table cluster when they are defined. Cluster tables can be used to store control data. They can also be used to store temporary data or texts, such as documentation.



What is the major difference between Standard tables, Pooled tables and Clustered Tables?

A transparent table is a table that stores data directly. You can read these tables directly on the database from outside SAP, for instance with an SQL statement.

A transparent table has a one-to-one relationship with a database table, i.e. when you create a transparent table, exactly the same table is created in the database; it is basically used to store transaction data.

A clustered and a pooled table cannot be read from outside SAP because certain data are clustered and pooled in one field.

One of the possible reasons is for instance that their content can be variable in length and build up. Database manipulations in ABAP are limited as well.

Pool and cluster tables, on the other hand, have a many-to-one relationship: many pooled tables are stored in a single database table, which is known as a table pool.

The pooled tables stored in a table pool do not need to have any foreign key relationship, but in the case of cluster tables it is a must. Pool and cluster tables are basically used to store application data.

A table pool can contain many (10 to 1000) small pooled tables with 10 to 100 records each, whereas a table cluster contains only a few (1 to 10), but very large, cluster tables.

For pooled and cluster tables you cannot create secondary indexes, you cannot use SELECT DISTINCT or GROUP BY, and you cannot use native SQL statements on them.

A structure is a table without data. It is only filled by program logic at the moment it is needed starting from tables.

A view is a way of looking at the contents of tables. It only contains the combination of the tables at the basis and the way the data needs to be represented. You actually call directly upon the underlying tables.

How many kinds of tables do we come across in ABAP?
Ans : 3 types : transparent, pooled and cluster tables.

How many kinds of internal table are there?
Ans:  5 Types.
Standard Table,
Sorted Table,
Index Table,
Hashed Table,
Any Table (generic type, rarely used)

Wednesday, May 19, 2010

Some more NetWeaver QA on Installation and Migration

Can I install an SAP NetWeaver 7.0 SR3 system on 32-bit platforms?

As of SAP NetWeaver 7.0 SR3 (SPS 14) you can install only dialog instances on 32-bit platforms. All remaining instances (central instance, database instance, central services instance (SCS), ABAP central services instance (ASCS)) can be installed only on 64-bit platforms.

Is SAP Solution Manager 7.0 a new release following SAP Solution Manager 4.0?

SAP Solution Manager 4.0 SR4 (SPS 15) was renamed to SAP Solution Manager 7.0 to make the name compliant with SAP NetWeaver 7.0.

The new name SAP Solution Manager 7.0 brings with it the advantage of both SAP Solution Manager and SAP NetWeaver having the same release number, and is aligned with SAP's decision to streamline its product offerings. This decision also highlights the close alignment and joint go-to-market strategy of these two products.

The renaming of SAP Solution Manager 4.0 to SAP Solution Manager 7.0 is a name change only; the SAP Solution Manager functionality, release schedule, installations and other technical aspects of SAP Solution Manager are therefore not affected.


How can I install SAP Solution Manager Diagnostics on an SAP Solution Manager 3.2 system upgraded to SAP Solution Manager 7.0?

Proceed as follows:
  1. Install the Java Add-In for ABAP with SAPinst.
  2. Deploy LMSERVICE*_.SCA with SAPinst
    For more information, see the installation guides at http://service.sap.com/instguides -> SAP Components -> SAP Solution Manager -> Release 7.0

Can I install an SAP NetWeaver 7.0 system as a non-Unicode system?

You can install an SAP NetWeaver 7.0 SR1 or SR2 ABAP system as a non-Unicode system. When you install an SAP NetWeaver 7.0 SR1 or SR2 dual-stack system (ABAP+Java), the ABAP part of this system can also be installed as non-Unicode. The Java part of this system can only be installed as Unicode.
As of SAP NetWeaver 7.0 SR3, you cannot install an SAP NetWeaver 7.0 ABAP system or the ABAP part of an SAP NetWeaver 7.0 SR3 dual-stack system as non-Unicode any longer.
However, non-Unicode is still supported when you perform the system copy for an SAP system upgraded to SAP NetWeaver 7.0 SR3.


Where can I find the Configuration Wizard?

The Configuration Wizard is a Java application in the SAP NetWeaver Administrator:
http://<host>:<port>/nwa -> Configuration Management -> Scenarios -> Configuration Wizard
or open the application directly with the following shortcut:
http://<host>:<port>/nwa/cfg-wizard



What are the benefits of using the Configuration Wizard?

  • You only have to read a reduced number of guides and get familiar with a reduced number of tools.
  • You do not need to check conflicting configuration settings and mutual dependencies between components, as the configuration wizard takes care of these dependencies.
  • Reduced time and effort for setup

How can I cancel/re-execute a configuration task which has the "Currently executing" status?

This may happen if your web session expired during the execution of a configuration task that was not able to finish in the background because it needed some user input/response. The next time you log in to the Configuration Wizard, if the server has not been restarted, you will see this task with the "Currently executing" status, which prevents you from re-executing it. The status can be cleared by restarting the "tc~lm~ctc~cul~startup_app" application. To do this, go to
http://<host>:<port>/nwa -> Operation Management -> Systems -> Start & Stop -> Java EE Applications
and stop and start the application.



What takes place during a homogeneous system copy?

The main purpose of a homogeneous system copy is to build a test, demo or training system, or to move a system to new hardware. The difference from a heterogeneous system copy is that both the database and the operating system remain the same. Because of this, on some platforms it is possible to perform the copy using database-dependent methods; please read SAP Note 89188. Please note: no matter whether you change the version or bit version of either the operating system or the database, the system copy is still considered a homogeneous system copy (e.g. a system copy from Windows 2000 32-bit to Windows 2003 x64).


What takes place during a heterogeneous system copy (migration)?

The main purpose of a heterogeneous system copy is to create a copy of an already existing R/3 system on a platform where either the operating system, the database, or both differ from the operating system/database of the source system.
The whole migration process consists of five main steps:
a) preparation steps on the source system
b) export of the source system data into a database-independent format. In an ABAP+Java system both the ABAP and the Java stack must be exported.
c) transfer of the data created during the export
d) installation of the new system together with the data import
e) post-processing steps within the target system


Which tools are used during a migration on source and target systems?

The main programs used for the migration are, depending on the kernel release, 'R3SETUP' or 'sapinst'. While running, they call other programs for particular purposes: R3LOAD, R3LDCTL, R3SZCHK. There are also several command files or, in other words, installation templates, that have the extension R3S (R3SETUP) or xml (sapinst).
For the kernel tools used during a migration, some rules should be followed:
a) The tools used for the export on the source system must have the same version as those on the target system.
b) The tools used must all have the same kernel version (do not mix kernel tools of different releases).
c) The tools must have the same kernel release as the system which is migrated.
The Java system copy tools do not depend on the kernel version and you can always use the latest version of these tools. For details please refer to note 784118.
These rules should be applied unless otherwise specified in the appropriate installation/migration note or guide. Please keep this in mind when downloading a patched version of any of the mentioned tools from SAP Service Marketplace.


What is the purpose of the files of different types that are used during a migration?

DDL<DBS>.TPL is used for the creation of table/index definitions in database-specific syntax; it contains the negative list of tables, views and indexes, the assignment of TABARTs to storage units and the next extent size of tables/indexes. TABART stands for a data class. For more details on this please refer to note 46272.
SAP<TABART>.STR contains the table/index definitions from the ABAP Dictionary.
SAP.TPL, export directory, block and file sizes.
SAP<TABART>.<nnn> (e.g. 001, 002) - the so-called dump files contain the data of all tables of a TABART in a non-platform-specific file format.
These are binary files and they must never be changed with any editor.
SAP<TABART>.EXT contains the initial sizes for tables and indexes. Not applicable to some RDBMS (e.g. MS SQL Server).
SAP<TABART>.TOC contains the position of the table data within the corresponding dump file, the name of the dump file, the time stamp of the unload and the number of table rows.
TOC files must never be changed unless SAP Support has approved it.
SAP<TABART>.log contains useful information in case of errors and for the restart point.
SAP<TABART>.TSK files are used by R3load as of release 6.20. For details please refer to note 455195.


I am considering a database/operating system change. Are there any requirements that should be met before the migration starts?

A heterogeneous system copy requires a certified consultant responsible for the migration, as well as the migration services if a productive system is affected. Please refer to SAP Note 82478, where the requirements are described in detail.


How and from where can I get all the necessary tools for a migration?

To order a Migration Kit, please create a customer message under the XX-SER-SWFL-SHIP component and specify the exact OS and DB versions as well as the kernel release of the system you would like to migrate. A migration guide can be downloaded at http://service.sap.com/instguides. A list of notes with the most up-to-date information can be found at the beginning of the migration guide.


Is there anything else I need to have/ to do/ to know before doing a migration?

You also need an installation package to build up the target system and the installation guide, which you can get in the same way as the Migration Kit and the System Copy Guide. Please also read carefully Note 82478 and the information stored under http://service.sap.com/osdbmigration.
You will find very useful information in the SAP OS/DB Migration Service and the SAP OS/DB Migration Service Planning Guide (pay attention to the chapter "Organizational Steps"), where you can also find all prerequisites and requirements for this procedure. Please note that the migration must be performed by a Basis consultant with the special certification for OS/DB migrations.


How can I check whether a migration of a specific product/os-db-combination is supported?

Please first check whether both the source and the target product/OS/DB combination are supported. Refer to the Platform Availability Matrix at http://service.sap.com/pam for this.
In addition, please check both the system copy guide and the relevant notes for any restrictions. In some exceptional cases it may be necessary to set up a pilot project for a specific system copy.
For more details regarding the availability of system copy procedures for BW 3.0B/3.1 and SAP NetWeaver 04 systems please refer to: http://service.sap.com/bi --> Services & Implementation --> System Copy and Migration.
Please also refer to the following notes:
#777024 - BW3.0 and BW3.1 System Copy
#771209 - NW04: System copy (supplementary note)
#888210 - NW04s: System copy (supplementary note)
#543715 - Projects for BW Migrations and system copies
For more details on the system copy of 'SAP Web AS Java 6.40' based systems please refer to note #785848 on restrictions and procedures.


Where can I find information on how to optimize the overall runtime of a system copy?

You may refer to: HTTP://service.sap.com/systemcopy -> System Copy & Migration -> Optimization for this.


Are there any restrictions regarding system copies in general?

Yes, there are. You should always refer to the corresponding system copy guide to check the details. For instance for the system copy of ABAP+Java or Java systems of release NW 7.0 the following applies:
- "Refresh"of the database is not supported. A"refresh" of the database means that only the database is loaded with the content of a database of a different system. As in this scenario no migration controller is invoked, this is not supported.
- Copying the database only is not supported
- Copying the central instance only is not supported. The migration controller deletes all dialog instances in the database, so the system is not complete any longer.
- Reinstalling the central instance without the database is not supported.
The migration controller deletes all dialog instances in the database, so the system is not complete any longer.


Is it possible to perform a final migration of a productive system without a "test" run?

No, you should never do this. You should perform a test migration on comparable hardware with a system that is a copy of the productive database. This is necessary both to get an idea of the overall runtime of the productive migration and to recognize major issues in the export/import before the final migration.
The same applies to the migration key: the migration key is generated as a self-service and should be tested before the productive migration. In case of any issues with the generated key, it is not possible to have migration keys created by the weekend support.

Sunday, May 9, 2010

Configure ALE

Tcode: SALE
Step 1: Create Logical System
On the next screen, create the logical systems YLS800 and YLS810
Save it and back
Step 2 : Assign Client to Logical System
Double-click on client 800 and assign the logical system YLS800
Double-click on client 810 and assign the logical system YLS810
Step 3 : Create RFC destination
Give the RFC destination Connection type, Description and Target Host
Click on Logon/Security
Give the Language, Client, User and Password
Click on Test Connection and Remote Logon
Step 4 : Create Port WE21
Select 'own port name', enter the port name and click on Continue
Give the Description and the RFC Destination created by you, i.e. YLS810RFC
Step 5 : Create Partner Profiles WE20
Give the Partner number, Partner type, Type, Agent, Language and save it
Click on Outbound Parameters
Give the Message type, Receiver port, Basic type and save it
Step 6 : Create Distribution Model
Click on Create Message Type, give the short text and technical name and continue
Step 7 : Generate Partner Profiles
Environment -> Generate Partner Profiles
Give the logical system name
Step 8 : Create a material and save it (in this example, material 858 was created)
Step 9 : Send the material with BD10
Step 10 : Check the IDoc in WE02
Click on F8
Step 11 : Execute the program RBDMOIND - Status Conversion with Successful tRFC Execution
You can check that status 12 appears in the status records.
Step 12 : Log in to client 810 and open the ABAP editor SE38
Execute the program RBDAPP01 - Inbound Processing of IDocs Ready for Transfer
Step 13 : Check the IDoc list in WE02
Check the entry in the MARA table

Enable an SAP R/3 System send Idocs to SAP PI


Maintain the sender R/3 system:
1. SM59 : Create an RFC destination to XI
2. WE21 : Create a TRFC Port ->Specify the RFC Destination Created
3. BD54 : Create a Logical System for the Idoc Receiver
4. WE20 : Create Partner Profile ->Maintain Outbound and the Inbound Parameters

Log on to XI System:
1. SM59 : RFC Destination for Sender System
2. IDX1 : Create the port used to get IDoc metadata from the sender system. (The port name must match the port name in the IDoc header - usually in the format SAP<SID>, e.g. SAPID1.) [Optional step, not mandatory]
3. IDX2 : Maintain the Idoc Metadata. This is needed only by XI, and not by other SAP systems. IDX2 is needed because XI needs to construct IDoc-XML from the IDoc. No other SAP system needs to do that.


To Enable Acknowledgement:
SXMB_ADM ->Integration Engine Configuration ->Specific Configuration ->Add New entry -> Select parameters as:
Category: RUNTIME
Parameters: ACK_SYSTEM_FAILURE
Current Value: 1


Go to SLD:
1. Create Technical System: Choose Web AS ABAP if the system is R/3 -> Define the SAP SID, installation number and database host name -> Maintain the message server details according to the sender system -> Maintain the client details of the sender system -> Select an installed product for the sender system


2. Create Business System: Choose WEB AS ABAP if the system is R/3 -> Choose the Technical System and the client Created Before -> Choose the Installed Product -> Set:
Business System Role: Application System
Related Integration Server: Integration Server


Idoc Tunneling in XI
To prevent performance-consuming XMLization and de-XMLization, IDocs can be tunneled through the SAP XI IDoc adapter, meaning that no XMLization is executed before the IDoc is passed on to the pipeline. Note that the message is converted to the UTF-8 codepage.
Start transaction SXMB_ADMIN on SAP XI.
Select Configuration -> Integration Engine Configuration and add the parameter EXTERNAL_MAPPER in the category IDOC.
Configure the parameter according to the conditions below:
If Message Code and Message Function are specified in the partner profile:
..ReceiverPort...
If only the Message Code is specified in the partner profile:
..ReceiverPort..
If only Message Function is specified in the partner profile:
..ReceiverPort...


Integration Builder
 Integration Directory:
 1. Add Business System: under Adapter-Specific Identifiers, set 'Logical System' identical to the 'Logical System Name' in the SLD Business System
 2. IDoc Receiver Communication Channel: set the port to the same value as in the XI system's IDX1

Wednesday, May 5, 2010

Archive Log Generation during Oracle Online Backup


When an Oracle online backup is kicked off, a SQL command is issued that puts the tablespaces into backup mode, and the datafiles are then copied to disk / tape. Once the backup is complete, another SQL statement is issued that takes the tablespaces out of backup mode.
The SQL statements are
alter database/tablespace begin backup;
alter database/tablespace end backup;
The backup mode can be seen via the V$BACKUP dynamic view.
SQL> select * from v$backup ;

     FILE# STATUS                CHANGE# TIME
---------- ------------------ ---------- ---------
         1 ACTIVE                 831311 05-MAY-10
         2 ACTIVE                 831311 05-MAY-10
         3 ACTIVE                 831311 05-MAY-10
         4 ACTIVE                 831311 05-MAY-10
         5 ACTIVE                 831311 05-MAY-10
         6 ACTIVE                 831311 05-MAY-10

6 rows selected.
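
A minimal sketch of the sequence for a single tablespace; the tablespace name, datafile path and copy target are examples only:

sqlplus -s / as sysdba <<'EOF'
alter tablespace PSAPSTABD begin backup;
exit
EOF

# Copy the datafiles of the tablespace while it is in backup mode
cp /oracle/C11/sapdata2/stabd_1/stabd.data1 /backup/stabd.data1

sqlplus -s / as sysdba <<'EOF'
alter tablespace PSAPSTABD end backup;
select file#, status from v$backup;
exit
EOF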

At first glance one might assume that during the backup all the tablespaces/datafiles are sealed and not writable, and that transaction changes are kept in the redo log files, the SGA or the rollback/undo segments and written back into the datafiles only when the database/tablespace is taken out of backup mode. As explained below, that is not what actually happens.

We might also have observed that during the online backup there is a chance of huge redo generation, which may impact database performance if the database is in archivelog mode.

Let's see what steps are taken during the online backup.

1. The tablespace is checkpointed.
2. The checkpoint SCN in the datafile headers stops incrementing with further checkpoints.
3. Full images of the changed database blocks are written to the redo logs the first time each block is changed.

When you issue

alter tablespace tbs_name begin backup;

A checkpoint is performed against the target tablespace and the datafile header is frozen, so no more updates are allowed to the datafile header; this is how the database knows the last time the tablespace had a consistent image of the data.

But during the backup, the corresponding datafiles in the tablespace allow normal read/write operations; that is, I/O activity is not frozen.

As for redo generation, each block is recorded into the redo log files the first time the block is changed. So, if a row is modified for the first time inside a data block after the online backup has started, the complete block image is recorded in the redo log files, but subsequent transactions on that block only record the changes, just as normal.

The above three steps are required to guarantee consistency when the file is restored and recovered. By freezing the checkpoint SCN in the file headers, any subsequent recovery on that backup copy of the file knows that it must commence at that SCN. Having an old SCN in the file header tells recovery that the file is an old one and that it should look for the redo log file containing that SCN and apply recovery starting there. Note that checkpoints to the datafiles in online backup mode are not suppressed during the backup, only the incrementing of the main checkpoint SCN flag; an "online backup checkpoint" SCN marker in the file header continues to increment as periodic or incremental checkpoints progress normally.
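
One way to observe this is to compare the file header checkpoint SCNs of the files in backup mode with those of the other datafiles; a minimal sketch (run while one tablespace is in backup mode):

sqlplus -s / as sysdba <<'EOF'
-- Files with STATUS = 'ACTIVE' keep the checkpoint SCN from the moment BEGIN BACKUP
-- was issued, while the headers of the other files keep advancing at every checkpoint.
select h.file#, b.status, h.checkpoint_change#, h.checkpoint_time
from   v$datafile_header h, v$backup b
where  h.file# = b.file#
order by h.file#;
exit
EOF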

By initially checkpointing the datafiles that comprise the tablespace and logging full block images to redo, Oracle guarantees that any blocks changed in the datafile while in online backup mode will also be available in the redologs in case they are ever used for a recovery.

Now many claim that during an online backup there is excessive redo generation compared to normal operation. It actually depends on the amount of block changes during the online backup, because the full image of a changed block is written to the redo logs only the first time the block is changed; normally, Oracle logs only the changes themselves, not the whole block. By logging full images of changed database blocks to the redo logs during the online backup, Oracle eliminates the possibility of the backup containing irresolvable split blocks. To understand this reasoning, you must first understand what a split block is.

Typically, Oracle database blocks are a multiple of OS blocks. For instance, most AIX installations use a default filesystem block size of 4 KB, whereas Oracle's default block size is 8 KB. That means the file system stores data in 4 KB chunks while Oracle reads and writes in 8 KB chunks (or multiples of 8 KB). While backing up a datafile, the backup utility (brbackup in the case of SAP) makes a copy of the datafile from the filesystem using OS utilities such as copy, dd, cpio or ocopy, and while making this copy it reads in OS-block-sized increments. If the database writer happens to be writing a DB block into the datafile at the same time that the backup utility is reading that block's constituent OS blocks, the backup copy of the DB block could contain some OS blocks from before the database performed the write and some from after. This would be a split block.

By logging the full image of each changed block to the redo logs, Oracle guarantees that in the event of a recovery, any split blocks that might be in the backup copy of the datafile will be resolved by overlaying them with the full legitimate image of the block from the redo logs. Upon completion of a recovery, any blocks that were copied in a split state into the backup will have been resolved through application of the full block images from the redo logs.