
ERP

Assemble to Order

The Assemble-to-Order (ATO) Benchmark integrates process chains across SAP Business Suite components. The ATO scenario is characterized by high-volume sales, short production times (from hours to one day), and individual assembly of products such as PCs, pumps, and cars. In general, each benchmark user has its own master data, such as material, vendor, or customer master data, to avoid data-locking situations. The ATO Benchmark, however, is designed to handle and overcome data-locking situations: its benchmark users access common master data, such as material, vendor, or customer master data.

 


 
Technical Components

Financials: CO-OM, CO-PC, FI-GL
Logistics: MM-IM, LE-WM, LO-VC, CS, SD-BIL, SD-SHP, SD-SLS, PP-BD, PP-ATO
Basis: BC-BMT, BC-SRV
Human resources: PD-OM
Cross application: CA-CL

Published results of the ATO Benchmark contain throughput numbers, for example, 2,500 assembly orders per hour. An assembly order is a request to assemble pre-manufactured parts and assemblies into finished products based on an existing sales order.

Dialog Steps
  • 0. Logon
  • 1. Main screen
  • 2. Call /nVA01 (create customer order)
  • 3. Enter order and organizational data
  • 4. Enter customer and material
  • 5. First level characteristic value assignment
  • 6. Second level characteristic value assignment
  • 7. Second level characteristic value assignment
  • 8. Choose Back (Control of resulting price)
  • 9. Choose Save (Create assembly order)
  • 10. Call /nMFBF (Repetitive Manufacturing backflush)
  • 11. Enter sales order data
  • 12. Choose Save
  • 13. Call /nVL01N (Create outbound delivery)
  • 14. Select sales order
  • 15. Choose Save
  • 16. Call /nLT03 (create transfer order)
  • 17. Save transfer order
  • 18. Call /nLT12 (confirm transfer order)
  • 19. Confirm transfer
  • 20. Call /nSO01 (SAP office - inbox)
  • 21. Select workflow-item
  • 22. Start goods issue via workflow
  • 23. Call /nVF01 (create invoice)
  • 24. Choose Save invoice
  • 25. Call /nend
  • 26. Confirm log off
 

User interaction steps 2-24 are repeated n times (23 user interaction steps --> minimum duration of 230 seconds per run).

Business aspect:
One run corresponds to the full assemble-to-order scenario for one item.
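These figures allow a simple back-of-the-envelope bound on throughput. The sketch below is illustrative only: it assumes the fixed 10-second think time per dialog step implied by the numbers above (23 steps --> 230 seconds) and ignores system response times, so it yields an upper bound rather than a measured result.

```python
# Illustrative ATO throughput bound derived from the figures above.
# Assumption: 10 s think time per user interaction step (23 steps -> 230 s);
# real runs also include system response times, so actual cycle times are longer.

THINK_TIME_S = 10      # assumed think time per user interaction step
STEPS_PER_RUN = 23     # user interaction steps 2-24
ORDERS_PER_RUN = 1     # one run produces one fully processed assembly order

def min_cycle_time_s() -> int:
    """Minimum duration of one run, ignoring response times."""
    return STEPS_PER_RUN * THINK_TIME_S

def max_orders_per_hour(concurrent_users: int) -> float:
    """Upper bound on fully processed assembly orders per hour."""
    return concurrent_users * ORDERS_PER_RUN * 3600 / min_cycle_time_s()

print(min_cycle_time_s())         # 230 seconds per run
print(max_orders_per_hour(160))   # ~2504, roughly the 2,500 orders/hour example above
```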


Cross Application Time Sheet (CATS)

The Cross Application Time Sheet (CATS) can be used by employees or personnel administrators to track employee working times. Time data is recorded with information referring to orders and cost centers, for example, and can be transferred to corresponding applications and components of the SAP Business Suite.
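For illustration, the sketch below models the kind of record such a time sheet captures. The class and field names are hypothetical and merely mirror the account-assignment objects mentioned above; they are not actual CATS data structures.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Hypothetical, simplified model of one time-sheet entry; field names are
# illustrative only and do not correspond to real CATS fields.
@dataclass
class TimeSheetEntry:
    employee_id: str                      # personnel number of the employee
    work_date: date                       # day on which the time was worked
    hours: float                          # recorded working time
    cost_center: str                      # receiving cost center
    internal_order: Optional[str] = None  # optional internal order

# One benchmark run processes 80 activity reports for 5 employees.
entry = TimeSheetEntry("00001001", date(2024, 1, 15), 8.0, "CC-4711", "ORD-100042")
print(entry)
```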

 
Dialog Steps
  • 0. Logon
  • 1. Call transaction CAC2
  • 2. Sort employees
  • 3. Choose four employees (scroll down once)
  • 4. Choose Create time data
  • 5. Enter time data and choose Return
  • 6. Choose Save
  • 7. Choose Sort
  • 8. Select another employee
  • 9. Call help requests for cost center
  • 10. Select cost center
  • 11. Call help request for internal order
  • 12. Select internal order
  • 13. Enter new time data and choose Return
  • 14. Choose Save
  • 15. Choose Sort
  • 16. Select five employees
  • 17. Change data
  • 18. Choose Last page
  • 19. Select last line
  • 20. Select all
  • 21. Choose Delete
  • 22. Choose Save
  • 23. Log off
 

Business aspect:
One run corresponds to the processing of 80 activity reports for 5 employees.


Financial Accounting (FI)

The Financial Accounting (FI) Benchmark simulates the following business process. In this scenario, four financial documents with three line items each are posted, and the line items of the fourth posting are displayed. Following that, 44 open items of one debtor, including the previously posted documents, are displayed, and four of them are balanced by an incoming payment. At the end of each run there are exactly 40 open items for each debtor, which serve as the basis for a new run.
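The open-item counts in this scenario are self-balancing, which is what lets each run end in the same state it started from. A minimal sketch of the arithmetic, assuming each of the four posted documents contributes one open customer item:

```python
# Open-item bookkeeping of one FI benchmark run (figures from the scenario above).
open_items = 40           # open items per debtor at the start of a run

open_items += 4           # four financial documents are posted
assert open_items == 44   # 44 open items of the debtor are displayed

open_items -= 4           # four items are balanced by the incoming payment
assert open_items == 40   # exactly 40 open items remain, the basis for the next run
```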

Dialog Steps
  • 0. Logon
  • 1. Main screen
  • 2. Call Post Document
  • 3. Create customer item
  • 4. Create general ledger account item
  • 5. Choose Post
  • 6. Call Display document
  • 7. Enter previously posted document
  • 8. Double-click first line
  • 9. Call Customer line item display
  • 10. Enter data and choose Execute
  • 11. Select first line
  • 12. Call Post incoming payments
  • 13. Enter header data
  • 14. Choose Process open items
  • 15. Select item 5 of the list
  • 16. Scroll down to the end
  • 17. Select last item
  • 18. Deactivate all selected items
  • 19. Choose Post
  • 20. Call /nend
  • 21. Confirm log off

Human Resources - Payroll (HR)

In contrast to online benchmarks, the Human Resources - Payroll (HR) Benchmark is a report with variants that is run as a batch job on the basis of events. The report, called RPCALCD0, exercises the German payroll program.

 

Procedure

While the procedure for this report is complex, the following overview provides insight into its underlying business processes. Starting with the personnel number and the payroll period, the report runs through the following processing steps of the German payroll program:

  • Basic data
  • Last payroll results
  • Capital formation
  • Company pension plan
  • Net payments/deductions and transfers
  • Final processing
 

The benchmark run has the following procedure (a minimal sketch follows the list):

  • 1. Two users log on to the system.
  • 2. The first user resets the data.
  • 3. When the high-load phase of the run has come to an end (that is, when the first batch process has finished), the first user logs off.
  • 4. The second user waits until all processes have finished and then triggers the evaluation before logging off.
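The sketch below restates this procedure in code. The BenchmarkSystem class and its methods are placeholders invented to make the sequence explicit; they are not SAP tooling.

```python
# Minimal sketch of the HR benchmark run procedure described above.
# BenchmarkSystem is a stand-in class, not an SAP interface.

class BenchmarkSystem:
    def __init__(self, batch_processes: int):
        self.remaining_batches = batch_processes

    def logon(self, user: str) -> str:
        print(f"{user} logs on")
        return user

    def reset_data(self, user: str):
        print(f"{user} resets the benchmark data")

    def wait_for_first_batch(self):
        self.remaining_batches -= 1
        print("high-load phase ends: first batch process finished")

    def wait_for_all_batches(self):
        self.remaining_batches = 0
        print("all batch processes finished")

    def trigger_evaluation(self, user: str):
        print(f"{user} triggers the evaluation")

    def logoff(self, user: str):
        print(f"{user} logs off")


system = BenchmarkSystem(batch_processes=4)               # batch count is illustrative
u1, u2 = system.logon("user 1"), system.logon("user 2")   # 1. two users log on
system.reset_data(u1)                                     # 2. first user resets the data
system.wait_for_first_batch()                             # 3. high-load phase ends ...
system.logoff(u1)                                         #    ... first user logs off
system.wait_for_all_batches()                             # 4. second user waits for all processes,
system.trigger_evaluation(u2)                             #    triggers the evaluation,
system.logoff(u2)                                         #    and logs off
```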

Materials Management (MM)

The Materials Management (MM) Benchmark takes you through a series of steps to create a purchase requisition for five materials (transaction ME51N), a purchase order for the five materials (ME21N), a goods receipt (MIGO), and an invoice (MIRO) for the purchase order.
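The four transactions form a simple procure-to-pay document chain in which each document references its predecessor. The sketch below models that chain for illustration only; the class and field names are hypothetical, not SAP structures.

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical, simplified model of the document chain the MM benchmark creates.
@dataclass
class Document:
    doc_type: str                              # requisition, purchase_order, goods_receipt, invoice
    transaction: str                           # SAP transaction used to create it
    items: List[str]                           # the five materials
    predecessor: Optional["Document"] = None   # reference to the preceding document

materials = [f"MAT-{i}" for i in range(1, 6)]  # five materials per run
req = Document("requisition", "ME51N", materials)
po = Document("purchase_order", "ME21N", materials, predecessor=req)
gr = Document("goods_receipt", "MIGO", materials, predecessor=po)
inv = Document("invoice", "MIRO", materials, predecessor=gr)
print(inv)
```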

Dialog Steps
  • 0. Logon
  • 1. Main screen
  • 2. Call /nME51N (Create purchase requisition)
  • 3. Enter data
  • 4. Choose Post
  • 5. Call /nME21N (Create purchase order)
  • 6. Enter data
  • 7. Choose Post
  • 8. Call /nMIGO (Goods receipt purchase order)
  • 9. Enter data
  • 10. Choose Execute
  • 11. Choose Post
  • 12. Call /nMIRO (Create invoice), enter company code
  • 13. Enter basic data
  • 14. Choose Payment
  • 15. Enter data
  • 16. Choose Post
  • 17. Call /nend
  • 18. Confirm log off

Production Planning (PP)

The Production Planning (PP) Benchmark consists of the following transactions (a minimal sketch of the order lifecycle follows the list):

  • Create a production order. (CO01)
  • Change the amount on the production order, release the order for production, and print the order. (CO02)
  • Create two completion confirmations for the production order (milestone confirmation with backflush and final confirmation). (CO11N)
  • Post goods receipt for the order. (MB31)
  • Settle the production order. (CO02)
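As mentioned above, here is a minimal sketch of the lifecycle that a single production order traces through these transactions. The status descriptions are informal and do not correspond to actual SAP system status codes.

```python
# Illustrative lifecycle of the production order driven by the PP benchmark.
# Status descriptions are informal, not actual SAP system statuses.
LIFECYCLE = [
    ("CO01",  "created"),                       # create production order
    ("CO02",  "changed, released, printed"),    # change amount, release, print
    ("CO11N", "milestone confirmation posted"), # confirmation with backflush
    ("CO11N", "final confirmation posted"),     # final confirmation
    ("MB31",  "goods receipt posted"),          # goods receipt for the order
    ("CO02",  "settled"),                       # settle the production order
]

for transaction, status in LIFECYCLE:
    print(f"{transaction}: order {status}")
```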
Dialog Steps
  • 0. Logon
  • 1. Main screen
  • 2. Call Create production order
  • 3. Enter general data
  • 4. Enter data
  • 5. Choose Save
  • 6. Call Change production order
  • 7. Enter order number, choose Return
  • 8. Change data
  • 9. Select order
  • 10. Choose Functions
  • 11. Choose Release
  • 12. Choose Order
  • 13. Choose Print
  • 14. Choose OK
  • 15. Choose Save
  • 16. Call Create completion confirmation
  • 17. Enter data
  • 18. Choose Save
  • 19. Call Create completion confirmation
  • 20. Enter data
  • 21. Choose Save
  • 22. Call Create goods receipt
  • 23. Enter data
  • 24. Copy data
  • 25. Choose Save
  • 26. Call Change production order
  • 27. Choose Environment
  • 28. Choose Individual processing
  • 29. Enter data and choose Execute
  • 30. Call /nend
  • 31. Confirm log off

Sales and Distribution (SD and SD-Parallel)

The Sales and Distribution (SD) Benchmark covers a sell-from-stock scenario, which includes the creation of a customer order with five line items and the corresponding delivery with subsequent goods movement and invoicing. It consists of the following transactions:

  • Create an order with five line items. (VA01)
  • Create a delivery for this order. (VL01N)
  • Display the customer order. (VA03)
  • Change the delivery (VL02N) and post goods issue.
  • List 40 orders for one sold-to party. (VA05)
  • Create an invoice. (VF01)

Each benchmark user has his or her own master data, such as material, vendor, or customer master data, to avoid data-locking situations.

Important note: On January 1, 2009, the SAP SD Benchmark was updated. Alongside the upgrade to SAP Business Suite 7 and SAP enhancement package 4 for SAP ERP 6.0, a number of additional, necessary updates were implemented. Business changes constantly; for example, Unicode and the use of the new general ledger are now common practice for SAP customers across all industries, and the SAP standard application benchmarks need to reflect this change. The updates are transparent; that is, the steps of the benchmark scenario remain unchanged. Please be aware that these changes make the SD benchmark more resource-intensive, which has a direct impact on the benchmark results.

 
 
 
SD Versus SD-Parallel
The SD-Parallel Benchmark consists of the same transactions and user interaction steps as the SD Benchmark. This means that the SD-Parallel Benchmark runs the same business processes as the SD Benchmark. The difference between the benchmarks is the technical data distribution.
An Additional Rule for Parallel and Distributed Databases
It is generally accepted that data distribution can significantly influence the benchmark result in a parallel environment. Therefore, in May 1996, the SAP Benchmark Council redefined data distribution to establish a means of reproducing and comparing results within parallel benchmarks. The additional rule is: distribute the benchmark users equally across all database nodes for the benchmark clients used (round-robin method).
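A minimal sketch of this round-robin assignment, for illustration only:

```python
# Round-robin distribution of benchmark users across database nodes,
# as required by the additional rule for parallel and distributed databases.
def distribute_users(num_users: int, num_db_nodes: int) -> dict[int, list[int]]:
    """Assign benchmark users 1..num_users to database nodes 0..num_db_nodes-1."""
    assignment = {node: [] for node in range(num_db_nodes)}
    for user in range(1, num_users + 1):
        assignment[(user - 1) % num_db_nodes].append(user)
    return assignment

# Example: 12 users over 4 database nodes -> 3 users per node
print(distribute_users(12, 4))
# {0: [1, 5, 9], 1: [2, 6, 10], 2: [3, 7, 11], 3: [4, 8, 12]}
```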
How to Appraise the Benchmark Results

Parallel benchmarks depend on the additional parameter "data distribution," which, by definition, does not exist in single-database benchmarks. All tests show that the scalability of parallel databases depends significantly on the data distribution. When you compare SD Benchmark results with SD-Parallel Benchmark results, keep this difference in mind.

Dialog Steps
  • 0. Logon
  • 1. Main screen
  • 2. Call /nVA01 (Create customer order)
  • 3. First screen
  • 4. Second screen (with five items)
  • 5. Choose Save
  • 6. Call /nVL01N (Create a delivery)
  • 7. First screen
  • 8. Choose Save
  • 9. Call /nVA03 (Display customer order)
  • 10. Choose Enter
  • 11. Call /nVL02N (Change delivery)
  • 12. Choose [F20] (Post goods issue)
  • 13. Call /nVA05 (List orders)
  • 14. Choose Enter
  • 15. Call /nVF01 (Create invoice)
  • 16. Choose Save
  • 17. Call /nend
  • 18. Confirm log off

User interaction steps 2-16 are repeated n times (15 user interaction steps --> minimum duration of 150 seconds per run).

Business aspect:
One run (user interaction steps 2 - 16) corresponds to the selling of five items.
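As with the ATO scenario, these figures imply a simple per-user throughput bound. The sketch below is illustrative only; it assumes the 10-second think time per dialog step implied by the stated minimum duration (15 steps --> 150 seconds) and ignores system response times.

```python
# Illustrative per-user bound on SD throughput derived from the figures above.
# Assumption: 10 s think time per step (15 steps -> 150 s minimum per run);
# real cycle times also include system response times.

STEPS_PER_RUN = 15    # user interaction steps 2-16
THINK_TIME_S = 10     # assumed think time per step
ITEMS_PER_RUN = 5     # one run sells five order line items

min_cycle_s = STEPS_PER_RUN * THINK_TIME_S   # 150 seconds per run
print(ITEMS_PER_RUN * 3600 / min_cycle_s)    # at most 120 order line items per user per hour
```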


Benchmark Results

SAP ATO Standard Application Benchmark Results, Two-Tier Internet Configuration, R/3 Releases 4.0 - 4.6
Date of Certification (mm/dd/yyyy)   Technology Partner   Fully Processed Assembly Orders Per Hour   Operating System Release   RDBMS Release   R/3 Release   Central Server   Central Server Memory (MB)   Certification Number
09/02/2011   Oracle   206360   Solaris 10   Oracle 11g   SAP enhancement package 4 for SAP ERP 6.0   SPARC Enterprise Server M9000, 64 processors / 256 cores / 512 threads, SPARC64 VII+, 3.00 GHz, 64 KB (D) + 64 KB (I) L1 cache per core, 12 MB L2 cache per processor,   2097152   2011033
09/02/2011   Fujitsu   206360   Solaris 10   Oracle 11g   SAP enhancement package 4 for SAP ERP 6.0   SPARC Enterprise Server M9000, 64 processors / 256 cores / 512 threads, SPARC64 VII+, 3.00 GHz, 64 KB (D) + 64 KB (I) L1 cache per core, 12 MB L2 cache per processor,   2097152   2011033
03/11/2003   Fujitsu Siemens Computers   12170   Solaris 8   Oracle 9i   4.6C   Fujitsu Siemens Computers PRIMEPOWER 900, 16-way SMP, SPARC64 V, 1.35 GHz, 256 KB L1 cache, 2 MB L2 cache   65536   2003012
03/11/2003   Fujitsu Siemens Computers   6220   Solaris 8   Oracle 9i   4.6C   Fujitsu Siemens Computers PRIMEPOWER 900, 8-way SMP, SPARC64 V, 1.35 GHz, 256 KB L1 cache, 2 MB L2 cache   32768   2003011
12/13/2002   HP   3090   HP-UX 11i   Oracle 9i   4.6C   hp rx5670, 4-way SMP, Itanium II, 1 GHz, 3 MB L3 cache   24576   2002069
03/22/2002   HP   3740   HP-UX 11i   Oracle 9i   4.6C   HP Server Model RP7410, 8-processors SMP, PA-RISC 8700 750 MHz, 2.25 MB L1 cache   16384   2002016
09/17/2001   HP   7000   HP-UX 11i   Oracle 8.1.7   4.6C   HP RP8400, 16-way PA-RISC 8700 750 MHz, 2.25 MB cache   32768   2001034
05/29/2001   Fujitsu Siemens   34260   Solaris 8   Oracle 8.1.7   4.6B   Fujitsu Siemens Primepower 2000, 128-processors SMP, Sparc64 560 MHz, 8 MB L2 cache   131072   2001018
04/12/2001   HP   18870   HP-UX 11.11   Oracle 8.1.6   4.6B   HP9000 Superdome Enterprise Server, 64-way SMP, PA-RISC 8600, 552 MHz   131072   2001014
02/05/2001   HP   1610   Windows 2000   SQL Server 2000   4.6B   HP NetServer LXr8500, 8-way SMP, Pentium III Xeon 700 MHz, 2 MB L2 cache   8192   2001003
12/11/2000   HP   16480   HP-UX 11.11   Oracle 8.1.6   4.6B   HP9000 Superdome Enterprise Server, 64-way SMP, PA-RISC 8600, 552 MHz   131072   2000030
11/10/2000   Bull   8570   AIX 4.3.3   DB2 UDB 7.1   4.6B   Bull Escala Model EPC 2450, 24-way SMP, RS64-IV 600 MHz, 16 MB L2 cache   32768   2000027
11/10/2000   Bull   6300   AIX 4.3.3   DB2 UDB 7.1   4.6B   Bull Escala Model EPC 2400, 24-way SMP, RS64-III 450 MHz, 8 MB L2 cache   32768   2000026
10/13/2000   IBM   8570   AIX 4.3.3   DB2 UDB 7.1   4.6B   IBM eServer pSeries 680, 24-way SMP, RS64-IV 600 MHz, 16 MB L2 cache   32768   2000025
10/13/2000   IBM   6300   AIX 4.3.3   DB2 UDB 7.1   4.6B   IBM RS/6000 Enterprise Server S80, 24-way SMP, RS64-III 450 MHz, 8 MB L2 cache   32768   2000024
03/31/2000   IBM   7700   AIX 4.3.3   DB2 UDB 6.1   4.0B   IBM RS/6000 Enterprise Server S80, 24-way SMP, RS64-III 450 MHz, 8 MB L2 cache   32768   2000008
09/13/1999   Compaq   2610   Tru64 Unix 4.0 F   Oracle 8.0.4   4.0B   AlphaServer GS 140, 8 way SMP, Alpha 21264A EV67 700 MHz, 8 MB L2 cache   16384   1999028
09/10/1999   HP   2260   HP-UX 11.0   Informix 7.30 FC7   4.0B   HP9000 N4000, 8-way SMP, PA-8500 440 MHz, 1.5 MB L1 cache   16384   1999026
06/14/1999   IBM   2390   OS/400 V4R4   DB2 UDB for AS/400 V4R4   4.0B   IBM AS/400e Model S40-2208, 12-way SMP, PowerPC 262 MHz, 8 MB L2 cache   6144   1999013
03/12/1999   Sun   2020   Sun Solaris 2.6   Informix 7.30 UC7   4.0B   Sun 6000, Model E 6000, 24 way SMP, UltraSparc II 250 MHz, 1MB L2 cache   16384   1999006
12/18/1998   Compaq   780   Digital Unix 4.0 D   Oracle 8.0.4   4.0B   Compaq Alpha Server 4100 5/600, 4 way SMP, 600 MHz, 4MB L2 cache   8192   1998040
 
SAP ATO Standard Application Benchmark Results, Three-Tier Internet Configuration, R/3 Releases 4.0 - 4.6
Date of Certification (mm/dd/yyyy)   Technology Partner   Fully Processed Assembly Orders Per Hour   Operating System Release   RDBMS Release   R/3 Release   Database Server   Database Server Memory (MB)   Number & Type of Application Servers   Certification Number
01/17/2002   HP   144090   HP-UX 11i   Oracle 9i   4.6C   HP 9000 Superdome Enterprise Server, 64-way PA-RISC 8700, 750 MHz, 2.25 MB L1 cache   131072   (total 79), 78 x HP RP7400, 8-way PA-RISC 8600, 550 MHz, 1.5 MB L1 cache   2002003
12/04/2001   HP   130570   HP-UX 11i   Oracle 9i   4.6C   HP 9000 Superdome Enterprise Server, 64-way PA-RISC 8700 750 MHz, 2.25 MB L1 cache   131072   (total 71), 70 x HP RP7400, 8-way PA-RISC 8600, 550 MHz, 1.5 MB L1 cache   2001047
09/01/2000   Bull   54170   AIX 4.3.3   DB2 V7.1   4.6B   Bull Escala Model EPC2400, 24-way RS64-III 450 MHz, 8 MB L2 cache   32768   (total 11), 10 x Escala Model EPC2400, 24-way RS64-III, 450 MHz   2000020
08/23/2000   IBM   54220   AIX 4.3.3   DB2 V7.1   4.6B   IBM RS/6000 Enterprise Server Model S80, 24-way RS64-III 450 MHz, 8 MB L2 cache   32768   (total 11), 10 x IBM RS/6000 Enterprise Server Model S80, 24-way RS64-III, 450 MHz   2000019
06/13/2000   HP   22610   HP-UX 11.0   Oracle 8i Rel. 2   4.6B   HP 9000 V2600, 16-way PA-RISC 8600 550 MHz, 1 MB L2 cache   8192   (total 18), 17 x HP9000 N4000, 8-way PA-RISC 8500 440 MHz   2000013
04/26/1999   HP   9130   Windows NT 4.0   Oracle 8.0.5   4.0B   HP Netserver LXr 8000, 4-way Pentium III Xeon 500 MHz, 2 MB L2 cache   2048   13 x Netserver LXr 8000, 4-way Pent III Xeon 500 MHz   1999011
SAP MM Standard Application Benchmark Results, Two-Tier Internet Configuration, R/3 Release 2.2
Date of Certification (mm/yyyy)   Technology Partner   Number of Benchmark Users   Benchmark Type   Average Dialog Response Time (sec)   Operating System - Release   RDBMS Release   R/3 Release   Central Server   Central Server Memory (MB)
03/1995   Digital   360   MM   0.65   OSF/1 3.2   Oracle 7.1.4   2.2 C   DEC 8400 TurboLaser, 8-way SMP   8,192
SAP PP Standard Application Benchmark Results, Two-Tier Internet Configuration, R/3 Release 2.2
Date of Certification (mm/yyyy)   Technology Partner   Number of Benchmark Users   Benchmark Type   Average Dialog Response Time (sec)   Operating System - Release   RDBMS Release   R/3 Release   Central Server   Central Server Memory (MB)
03/1995   Digital   180   PP   2.00   OSF/1 3.2   Oracle 7.1.4   2.2 C   DEC 8400 TurboLaser, 8-way SMP   8,192