
Importing Offers from Amazon S3 into SAP Marketing Cloud

Price promotions are a major category of sales promotion in which companies reduce the selling price of a product or service to entice customers to buy. In SAP Marketing Cloud, offers are usually created by external systems and subsequently imported. In this article, we will take a look at price promotions as one type of offer.

Offers are kept in SAP Marketing Cloud and contain information such as:

  • Basic information on the offer
  • Time validity and status of the offers
  • A list of locations where the offer is valid
  • A list of products and product categories
  • Contacts to whom the offers are sent
  • Offer content

In this article, we will focus on the scenario of importing offers created by an external system into SAP Marketing Cloud. As many customers use Amazon Simple Storage Service (Amazon S3) to store and retrieve data, we will store the offers in an Amazon Web Services (AWS) S3 bucket. Reading the offers and converting and mapping them into an appropriate structure will be done in SAP Cloud Platform Integration (CPI). Since the offers don't include all the necessary data, we will make one external call to enrich them during processing in SAP CPI before importing them into SAP Marketing Cloud.

We will show you how to use XSLT mapping to generate an OData request that fetches additional information, how such data can be temporarily stored in memory using a HashMap, and how to split a large volume of data into individual offers to be imported into SAP Marketing Cloud. When the import of the offers is completed, we will delete the file from the S3 bucket to avoid repeated processing. Furthermore, we would like to show that some approaches may significantly lower performance, especially when processing large volumes of data. It is therefore important to run various test scenarios to find out how the system performs when processing large data sets.

Scope

To access AWS S3, we will use an authenticated request whose Authorization header value includes a signature. A detailed description of all the steps to calculate the signature is provided at: https://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-header-based-auth.html#example-signature-calculations.

We will not cover how to create an S3 bucket, how to create a test user, or how to grant a user bucket permissions; this is out of scope for this article.

The prerequisites for reading a file from an S3 bucket are:

  • An existing AWS S3 bucket
  • Existing credentials composed of an AWS Access Key and an AWS Secret Key


In SAP CPI, we will create an iFlow that reads a file from an S3 bucket and converts it into the appropriate format. We will then use a sequential multicast so that one branch can make an external call and temporarily store data for further processing; the next step will be to enrich the offers with the data acquired by that call. This article is not intended to provide a ready-to-run integration scenario; for real-life use, you must also consider how to proceed in case of an error. The purpose is not to give a complete solution, but to provide guidelines and best practices for processing large files provided by external systems.

For better clarity, our use case is displayed in the diagram below. The whole process is described in three steps:

Step 1: Integration with an AWS S3 bucket to read the offers from the file

Step 2: Data enrichment, where we will:

  • Use a sequential multicast in iFlow to make an HTTP call
  • Demonstrate how to use XSLT to generate an HTTP request
  • Show how to temporarily store data in HashMap and enrich a message

Step 3: Importing the offers into SAP Marketing Cloud with extra focus on:

  • How to use an XML parser to modify an XML message
  • How to get rid of namespaces in an XML message




Implementation

Step 1: Integration with AWS S3

To read files from an S3 bucket, we will use the REST API. Our request must be authenticated by AWS, which requires valid credentials. We will make the REST API call directly from our Groovy code, based on the information at: https://docs.aws.amazon.com/AmazonS3/latest/API/Welcome.html

We need to create a signature using the credentials and include the signature in our REST API request. Suppose we have a file (containing the offers) saved in an S3 bucket, and we already have the credentials composed of an Access Key and a Secret Key to access the S3 bucket.



Below is a code extract necessary to make an HTTP request to AWS S3. Most of the code can be found here: https://docs.aws.amazon.com/general/latest/gr/signature-v4-examples.html#signature-v4-examples-java


import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.text.DateFormat;
import java.text.SimpleDateFormat;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

//************* REQUEST VALUES *************
String method = 'GET';
String host = 's3.eu-central-1.amazonaws.com';
String region = 'eu-central-1';
String service = 's3';
String endpoint = 'https://s3.eu-central-1.amazonaws.com';
// Read the AWS keys from security artifacts. Best practice is NOT to embed credentials in code.
String access_key = 'xxx'; // placeholder
String secret_key = 'xxx'; // placeholder
// Create a date for headers and the credential string
def date = new Date();
DateFormat dateFormat = new SimpleDateFormat("yyyyMMdd'T'HHmmss'Z'");
dateFormat.setTimeZone(TimeZone.getTimeZone("UTC"));//server timezone
String amz_date = dateFormat.format(date);
dateFormat = new SimpleDateFormat("yyyyMMdd");
dateFormat.setTimeZone(TimeZone.getTimeZone("UTC"));
String date_stamp = dateFormat.format(date);
String canonical_uri = '/bucket-rv/Offers_TEST.xml';
// In the canonical request string, the empty request body has the hash : e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
String canonical_querystring = '';
String canonical_headers = 'host:' + host + '\n'+ 'x-amz-content-sha256:' + 'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855' + '\n' + 'x-amz-date:' + amz_date + '\n';
String signed_headers = 'host;x-amz-content-sha256;x-amz-date';
String payload_hash = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855";
String canonical_request = method + '\n' + canonical_uri + '\n' + canonical_querystring + '\n' + canonical_headers + '\n' + signed_headers + '\n' + payload_hash;
String algorithm = 'AWS4-HMAC-SHA256';
String credential_scope = date_stamp + '/' + region + '/' + service + '/' + 'aws4_request';
String string_to_sign = algorithm + '\n' +  amz_date + '\n' +  credential_scope + '\n' +  generateHex(canonical_request);
byte[] signing_key = getSignatureKey(secret_key, date_stamp, region, service);
byte[] signature = HmacSHA256(string_to_sign,signing_key);
String strHexSignature = bytesToHex(signature);
String authorization_header = algorithm + ' ' + 'Credential=' + access_key + '/' + credential_scope + ', ' +  'SignedHeaders=' + signed_headers + ', ' + 'Signature=' + strHexSignature;
message.setHeader("x-amz-date",amz_date);
message.setHeader("x-amz-content-sha256", "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855")
message.setHeader("Authorization", authorization_header);
message.setHeader("Host", "s3.eu-central-1.amazonaws.com");
// Methods to generate the signature
String bytesToHex(byte[] bytes) {
    char[] hexArray = "0123456789ABCDEF".toCharArray();           
    char[] hexChars = new char[bytes.length * 2];
    for (int j = 0; j < bytes.length; j++) {
        int v = bytes[j] & 0xFF;
        hexChars[j * 2] = hexArray[v >>> 4];
        hexChars[j * 2 + 1] = hexArray[v & 0x0F];
    }
    return new String(hexChars).toLowerCase();
}
String generateHex(String data) {
    MessageDigest mac = MessageDigest.getInstance("SHA-256");
    byte[] signatureBytes = mac.digest(data.getBytes(StandardCharsets.UTF_8));
    StringBuffer hexString = new StringBuffer();
    for (int j=0; j<signatureBytes.length; j++) {
        String hex=Integer.toHexString(0xff & signatureBytes[j]);
        if(hex.length()==1) hexString.append('0');
    hexString.append(hex);
    }
    String encryptedSignature = hexString.toString();
    String encryptHash = encryptedSignature.replace("-","");
    //encryptHash = encryptHash.toUpperCase();
    return encryptHash;
}
byte[] HmacSHA256(String data, byte[] key) throws Exception {
    String algorithm="HmacSHA256";
    Mac mac = Mac.getInstance(algorithm);
    mac.init(new SecretKeySpec(key, algorithm));
    return mac.doFinal(data.getBytes("UTF8"));
}
byte[] getSignatureKey(String key, String dateStamp, String regionName, String serviceName) throws Exception {
    byte[] kSecret = ("AWS4" + key).getBytes("UTF8");
    byte[] kDate = HmacSHA256(dateStamp, kSecret);
    byte[] kRegion = HmacSHA256(regionName, kDate);
    byte[] kService = HmacSHA256(serviceName, kRegion);
    byte[] kSigning = HmacSHA256("aws4_request", kService);
    return kSigning;
}
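As the comment in the script above indicates, credentials should not be embedded in code. In SAP CPI, they can instead be deployed as a User Credentials security artifact and read at runtime via the Secure Store API; a minimal sketch (the alias aws_s3_credentials is an assumption):

import com.sap.it.api.ITApiFactory;
import com.sap.it.api.securestore.SecureStoreService;
import com.sap.it.api.securestore.UserCredential;

// Look up the deployed credential artifact by its alias (hypothetical alias)
def secureStore = ITApiFactory.getApi(SecureStoreService.class, null);
UserCredential credential = secureStore.getUserCredential("aws_s3_credentials");
String access_key = credential.getUsername();
String secret_key = new String(credential.getPassword());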


Once we are done with the Groovy script in our iFlow, we add a Request-Reply element and choose an HTTP adapter, which enables us to send an HTTP request to AWS S3. In this article, we will not deal with externalization, which would normally be used to avoid hard-coded values in integration flows.

The “Address” is the URL of AWS S3 that is to be called from the Request-Reply element.
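In our example, based on the values used in the Groovy script above, this would be https://s3.eu-central-1.amazonaws.com/bucket-rv/Offers_TEST.xml.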



Enter the endpoint of the iFlow into the Address field and, on the Authorization tab, enter the login credentials of the SAP CPI tenant. Once we complete the first part of the scenario, we can save and deploy our iFlow and then test it using Postman.

The response will show the content of the file (offers) from an AWS S3 bucket. In our example, we can see that the offer consists of basic information such as Offer ID, Offer Name and Offer Description, Offer Validity From - To, and also sub-entities like Marketing Location and Products.
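The exact layout of the file depends on the producing system. Based on the element names referenced by the XPath expressions and scripts later in this article, a single simplified offer record might look like this (illustrative values):

<d>
    <results>
        <MarketingOffer>11ECFOCHTUM</MarketingOffer>
        <OfferName>Espresso Promotion</OfferName>
        <OfferStartDateTime>2019-07-01T00:00:00</OfferStartDateTime>
        <OfferEndDateTime>2019-07-31T23:59:59</OfferEndDateTime>
        <OfferMarketingLocations>
            <results1>
                <MarketingLocation>STORE_0001</MarketingLocation>
            </results1>
        </OfferMarketingLocations>
        <OfferProducts>
            <results2>
                <Product>000000000010536665</Product>
                <ProductUnit>EA</ProductUnit>
                <ProductPrice>12.99</ProductPrice>
            </results2>
        </OfferProducts>
    </results>
</d>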



Now, let's take a look at the second part of our scenario, which covers the conversion, enrichment, and finalization of the offers intended for SAP Marketing Cloud. As mentioned at the beginning, the offers we receive from the AWS S3 bucket are incomplete, so we need to add the missing entities and attributes.

Before that, we should take a look at the OData service metadata for offers. The following URL returns the metadata file for the offer import API service:


https://<hostname>-api.s4hana.ondemand.com/sap/opu/odata/sap/CUAN_OFFER_IMPORT_SRV/$metadata


The import of offer data is always initiated through the import headers entity; to enable bulk processing, a deep insert on the offer entity is used. The offer OData resource represents an imported offer and provides the basic offer header attributes that can be imported. Resource path:

https://<hostname>-api.s4hana.ondemand.com/sap/opu/odata/SAP/CUAN_OFFER_IMPORT_SRV/Offers

The offers data structure pictured below will be enhanced with an “OfferContent” sub-entity containing attributes like content source URL and content target URL.



We will assume that the offers apply to products that we have already saved in SAP Marketing Cloud under unique product IDs. These products already include the image URLs and the target product image URLs. Our next objective will be to carry out an HTTP request during offer processing in SAP CPI in order to obtain the missing product URLs.

Step 2: Data Enrichment - Offers

One approach to enriching the data with the missing information is to make an individual request for each offer. With, for example, 100,000 offers, this would mean sending the same number of requests to SAP Marketing Cloud, and processing would take an extremely long time. Therefore, we will take a different approach:

  • First, we will search for all unique product IDs in the file from AWS S3
  • Then, we will generate one OData batch request to receive all product image URLs from SAP Marketing Cloud and temporarily save them in memory
  • Finally, we will match these image URLs with the product IDs found in the first step


The picture below (a part of the iFlow process) displays the offer enrichment process:



After receiving the offers from the S3 bucket and removing unnecessary namespaces, we will use the sequential multicast pattern, which sends the same message into two branches in a specified order. This pattern is important because the next branch won't be executed until the previous one has completed.

  • In Branch 1, we will use an XSLT mapping to remove whitespace and newlines from the message at the tag level (a minimal sketch of such a stylesheet follows this list).
  • In Branch 2, we will use a predefined XSLT mapping to create a custom batch request for an OData service to fetch the product image URLs.
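Since the Branch 1 stylesheet is not reproduced here, the following is a minimal sketch of a whitespace-stripping stylesheet (an illustration; the actual mapping in the iFlow may differ):

<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
    <xsl:output method="xml" indent="no"/>
    <!-- Drop whitespace-only text nodes and copy everything else unchanged -->
    <xsl:strip-space elements="*"/>
    <xsl:template match="@* | node()">
        <xsl:copy>
            <xsl:apply-templates select="@* | node()"/>
        </xsl:copy>
    </xsl:template>
</xsl:stylesheet>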


The "XSLT Mapping 1" step, which generates the batch request, is shown below:

<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet 
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    version="2.0">
    <xsl:output method="text" indent="yes"></xsl:output>
    <xsl:template match="/">
        <xsl:for-each 
            select="distinct-values(/d/results/OfferProducts/results2/Product)">
            <xsl:text>--batch01&#10;</xsl:text>
            <xsl:text>Content-Type: application/http&#10;</xsl:text>
            <xsl:text>Content-Transfer-Encoding: binary&#10;</xsl:text>
            <xsl:text>&#10;</xsl:text>
            <xsl:text>GET ProductOriginDataSet(ProductID='</xsl:text>    
            <xsl:value-of select="."/>
            <xsl:text>',ProductOrigin='01_HYBRIS_PRODUCT') HTTP/1.1&#10;</xsl:text>
            <xsl:text>&#10;</xsl:text>
            <xsl:text>&#10;</xsl:text>
        </xsl:for-each>
        <xsl:text>--batch01--</xsl:text>
    </xsl:template>
</xsl:stylesheet>


As we already mentioned, the input into the XSLT processing is the message (the offers), and the output will be the batch request that contains a list of GET operations, as shown below:


--batch01
Content-Type: application/http
Content-Transfer-Encoding: binary

GET ProductOriginDataSet(ProductID='000000000010148007',ProductOrigin='01_HYBRIS_PRODUCT') HTTP/1.1

--batch01
Content-Type: application/http
Content-Transfer-Encoding: binary

GET ProductOriginDataSet(ProductID='000000000010100436',ProductOrigin='01_HYBRIS_PRODUCT') HTTP/1.1

--batch01
Content-Type: application/http
Content-Transfer-Encoding: binary

GET ProductOriginDataSet(ProductID='000000000010100554',ProductOrigin='01_HYBRIS_PRODUCT') HTTP/1.1

--batch01
Content-Type: application/http
Content-Transfer-Encoding: binary

GET ProductOriginDataSet(ProductID='000000000010100586',ProductOrigin='01_HYBRIS_PRODUCT') HTTP/1.1

--batch01--


To send the batch request we created in the step above to SAP Marketing Cloud, we will use the Request-Reply pattern. We won't go into much detail on OData usage, as there are many articles on this topic.

The service we are going to use is:  http://xxxxx-api.s4hana.ondemand.com:443/sap/opu/odata/sap/APIMKTPRODUCTSRV;v=0002/$batch
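Note that the OData $batch endpoint expects the request body to be sent as a multipart document whose boundary matches the one used in the generated body (batch01). In a Content Modifier or a small Groovy step, the header can be set, for example, as follows:

// The boundary must match the one used in the XSLT-generated batch body
message.setHeader("Content-Type", "multipart/mixed; boundary=batch01");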

Once the request has been sent to SAP Marketing Cloud, the response, including the URLs, will look as follows:



In the image above, you can see an example of one product containing the necessary image URL (<d:ProductImageURL>) and website URL (<d:WebsiteURL>) received from SAP Marketing Cloud. In the next step, we will use a Groovy script to read this response and build a HashMap that temporarily stores the products together with their URLs.
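For orientation, a single part of such a $batch response has roughly the following shape (an abbreviated illustration, not the exact payload; the Groovy script below relies on the HTTP/1.1 status line and the XML content of each part):

--batchresponse_abc123
Content-Type: application/http
Content-Transfer-Encoding: binary

HTTP/1.1 200 OK
Content-Type: application/xml

<entry>
    ...
    <d:ProductID>000000000010148007</d:ProductID>
    <d:ProductOrigin>01_HYBRIS_PRODUCT</d:ProductOrigin>
    <d:WebsiteURL>https://www.example.com/product/10148007</d:WebsiteURL>
    <d:ProductImageURL>https://www.example.com/images/10148007.jpg</d:ProductImageURL>
    ...
</entry>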

For our use case, we will assume that all products with image URLs are stored in SAP Marketing Cloud. In real-life cases, a query for a product that doesn't exist returns a 404 response; such situations should be handled.

The following code snippet from the Groovy script shows a way of storing multiple values under the same hash key. In our example, we store the values ProductOrigin, WebsiteURL, and ProductImageURL for each product ID.


import com.sap.gateway.ip.core.customdev.util.Message;
import java.util.HashMap;
import groovy.xml.XmlUtil;
def Message processData(Message message) {
    def xmlBlocks = []
    def xmlErrors = []
    Map<String,List> map1  = new HashMap<>();
	String ProductID =  '';
	String ProductOrigin =  '';
	String WebsiteURL =  '';
	String ProductImageURL =  '';
    //Body
    def body = message.getBody(String.class) as String;
    // Split the batch response into its parts and keep the XML of each part,
    // separating successful (200) lookups from not-found (404) ones
    body.split('HTTP/1.1').each{ block ->
        if ( block.substring(1,4) == '200' && block.lastIndexOf('>')!= -1 && block.indexOf('<')!= -1){
            xmlBlocks << block.substring(block.indexOf('<'), block.lastIndexOf('>')+1)
        }
        if ( block.substring(1,4) == '404' && block.lastIndexOf('>')!= -1 && block.indexOf('<')!= -1){
            xmlErrors << block.substring(block.indexOf('<'), block.lastIndexOf('>')+1)
        }
    }
    // Extract the product attributes from each XML part; the replaceAll calls
    // strip the [ ] brackets left over from coercing the findAll result to a String
    xmlBlocks.each{ xmlBlock ->
        def parsedXml = new XmlSlurper(false,false).parseText(xmlBlock)
        ProductID = parsedXml.'**'.findAll { it.name() == 'd:ProductID' }
        ProductID = ProductID.replaceAll("\\[|\\]","")
        ProductOrigin = parsedXml.'**'.findAll { it.name() == 'd:ProductOrigin' }
        ProductOrigin = ProductOrigin.replaceAll("\\[|\\]","")
        WebsiteURL = parsedXml.'**'.findAll { it.name() == 'd:WebsiteURL' }
        WebsiteURL = WebsiteURL.replaceAll("\\[|\\]","")
        ProductImageURL = parsedXml.'**'.findAll { it.name() == 'd:ProductImageURL' }
        ProductImageURL = ProductImageURL.replaceAll("\\[|\\]","")
        // One key, multiple values: product ID -> [origin, website URL, image URL]
        map1[ProductID] = [Objects.toString(ProductOrigin, ""), Objects.toString(WebsiteURL, ""), Objects.toString(ProductImageURL, "")]
    }
	message.setProperty("ProductsUUIDMAP", map1);	
	def str = map1.inspect()
    message.setProperty("ProductsUUIDMAP_str", str);
    message.setBody('');
    return message;
}
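After this script has run, the ProductsUUIDMAP property holds one entry per product; an entry might look like this (illustrative values):

['000000000010148007': ['01_HYBRIS_PRODUCT', 'https://www.example.com/product/10148007', 'https://www.example.com/images/10148007.jpg']]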


As we used the Multicast step to process the message in two branches, we will bring the branches together using the Join and Gather elements as shown in the picture below.



The next step is to add the missing image links to the products. This is done using the Groovy script "Modify XML". First, we read the message body using XmlSlurper, then we find all products in the offers, fetch the URLs from the HashMap, and add them to the products.


The "Modify XML" script is shown below:

import com.sap.gateway.ip.core.customdev.util.Message;
import java.util.HashMap;
import groovy.util.slurpersupport.Node;
import groovy.util.slurpersupport.NodeChild;
import groovy.xml.XmlUtil;
def Message processData(Message message) {
    def body = message.getBody(java.lang.String) as String
    //Properties 
    def prop = message.getProperties();
    def map1 = prop.get("ProductsUUIDMAP");
    def pp_xml = new XmlSlurper().parseText(body)
  	def products = pp_xml.'**'.findAll{it.name() == 'results2'}
    def count = 0;
	products.each { val ->
	   	def	WebsiteURL ="";
	    def	ProductImageURL ="";
	    // Build the lookup key from the product ID (leading zeros removed) and the product unit
	    def prod_key = val.Product.text().replaceFirst ("^0*", "") + val.ProductUnit.text();
		val.children().each { pit ->
			if (pit.name() == 'Product') {
				// Look up the URLs collected by the previous Groovy script
				def x = map1.find{ it.key == prod_key }?.value
				if(x) {
					WebsiteURL =  x[1];
					ProductImageURL =  x[2];
                    count = count + 1;
				}
			}
		}
		val.appendNode {
			'ImageURL'(ProductImageURL);
			'WebURL'(WebsiteURL);		
		}		
	}
   	def xml_text= XmlUtil.serialize(pp_xml)
	message.setBody(xml_text.toString());
    return message;
}
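After this step, each product node carries the two new elements; a single enriched product might look like this (illustrative values):

<results2>
    <Product>000000000010536665</Product>
    <ProductUnit>EA</ProductUnit>
    <ProductPrice>12.99</ProductPrice>
    <ImageURL>https://www.example.com/images/10536665.jpg</ImageURL>
    <WebURL>https://www.example.com/product/10536665</WebURL>
</results2>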


If we invoked the iFlow at this point, we could see the full payload enriched with these new content elements.

In the following step, we will use the splitter, which splits a single message into multiple partial messages that can be processed individually. The splitter is often used when an external system sends large messages containing hundreds of thousands or millions of records, and the same number of records is expected to be passed on to the receiver system. In such cases, the receiver system may not be able to process such a large message, and performance problems or memory-consumption issues may occur. One solution is to use the splitter to create individual messages to be processed.
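In SAP CPI, this is typically done with a General Splitter. For our payload, an XPath expression such as //results (one node per offer; the exact expression depends on the final message structure) would produce one message per offer, and the splitter's Grouping option can be used to batch several offers per message to balance throughput and memory consumption.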



In the next part of the processing, we will focus on the mapping functionality. We will work on the XML transformation, where we have to provide the XML schema of the source and the target message by uploading respective XSD files. 

However, we have to perform one additional data-processing step: adding two attributes related to the communication channels to each offer. In our case, the communication channels will be 'EMAIL' and 'ONLINE_SHOP'.

The "Groovy Script 1" that performs these steps is shown below:


import com.sap.gateway.ip.core.customdev.util.Message;
import java.util.HashMap;
import groovy.util.slurpersupport.Node;
import groovy.util.slurpersupport.NodeChild;
import groovy.xml.XmlUtil;
def Message processData(Message message) {
    def body = message.getBody(java.lang.String) as String;
    def pp_xml = new XmlSlurper().parseText(body)
  	def offers = pp_xml.'**'.findAll{it.name() == 'results'}
	offers.each { val ->
	    // Build an external offer ID from the offer ID, the first product (leading zeros removed), and its unit
	    String s_id = val.OfferProducts[0].results2[0].Product[0].text();
		String i_key = val.MarketingOffer[0].text() + s_id.replaceFirst ("^0*", "") + val.OfferProducts[0].results2[0].ProductUnit[0].text();
		String s_weburl = val.OfferProducts[0].results2[0].WebURL[0].text();
		String s_imgurl = val.OfferProducts[0].results2[0].ImageURL[0].text();
		String s_price = val.OfferProducts[0].results2[0].ProductPrice[0].text();
		val.appendNode {
			'YY1_PromotionPrice_MOF'(s_price);
		}
		// Insert the OfferContents element (one results3 node per communication medium)
		// directly after the OfferEndDateTime node
	    val.'OfferEndDateTime' + {
			'OfferContents' { 
				'results3' {'CommunicationMedium'('EMAIL') + 'OfferIdExt'(i_key) + 'OfferContentSourceURL'(s_imgurl) + 'OfferContentTargetURL'(s_weburl) } + 
				'results3' { 'CommunicationMedium'('ONLINE_SHOP') + 'OfferIdExt'(i_key) + 'OfferContentSourceURL'(s_imgurl) + 'OfferContentTargetURL'(s_weburl)  } }
		}
		val.MarketingOffer.replaceNode {
			'MarketingOffer'( i_key );
		}
		val.OfferProducts.results2.each { i_product ->
			i_product.appendNode {
				'OfferIdExt'(i_key)								
			}				
		}
		val.OfferMarketingLocations.results1.each { i_location ->
			i_location.appendNode {
				'OfferIdExt'(i_key)								
			}
		}
	}
	def xml_text = XmlUtil.serialize((groovy.util.slurpersupport.GPathResult)pp_xml)
	message.setBody(xml_text.toString());    
    return message;
}


The snippet below shows part of the payload for one particular offer with the two communication media, EMAIL and ONLINE_SHOP, created using the Groovy script above.


<OfferContent>
     <OfferContent>
           <OfferContentTypeName>Image</OfferContentTypeName>
           <CommunicationMediumName>Email</CommunicationMediumName>
           <OfferContentType>01</OfferContentType>
           <OfferIdOrigin>ERP_EXT_OFFER</OfferIdOrigin>
           <CommunicationMedium>EMAIL</CommunicationMedium>
           <OfferContentPosition>TOP</OfferContentPosition>
           <OfferContentSourceURL>https://images-na.ssl-images-amazon.com/images/I/51tbABf6XKL._SX522_.jpg</OfferContentSourceURL>
           <OfferContentSourceURLDesc></OfferContentSourceURLDesc>
           <LanguageISOCode>EN</LanguageISOCode>
           <MarketingOfferContent>00001</MarketingOfferContent>
           <OfferContentTargetURL>https://www.amazon.com/Organic-Espresso-Bean-Coffee-5-Pound/dp/B002GWHAVM?th=1</OfferContentTargetURL>
           <MarketingOfferContentUUID>00163e59-95a9-1ed9-a89e-fdd916cb99dd</MarketingOfferContentUUID>
           <OfferContentTargetURLDesc></OfferContentTargetURLDesc>
           <OfferIdExt>11ECFOCHTUM10536665EA</OfferIdExt>
     </OfferContent>
     <OfferContent>
           <OfferContentTypeName>Image</OfferContentTypeName>
           <CommunicationMediumName>Online-Shop</CommunicationMediumName>
           <OfferContentType>01</OfferContentType>
           <OfferIdOrigin>ERP_EXT_OFFER</OfferIdOrigin>
           <CommunicationMedium>ONLINE_SHOP</CommunicationMedium>
           <OfferContentPosition>TOP</OfferContentPosition>
           <OfferContentSourceURL>https://images-na.ssl-images-amazon.com/images/I/51tbABf6XKL._SX522_.jpg</OfferContentSourceURL>
           <OfferContentSourceURLDesc></OfferContentSourceURLDesc>
           <LanguageISOCode>EN</LanguageISOCode>
           <MarketingOfferContent>00001</MarketingOfferContent>
           <OfferContentTargetURL>https://www.amazon.com/Organic-Espresso-Bean-Coffee-5-Pound/dp/B002GWHAVM?th=1</OfferContentTargetURL>
           <MarketingOfferContentUUID>00163e59-95a9-1ed9-a89e-fdf5b5bd99dd</MarketingOfferContentUUID>
           <OfferContentTargetURLDesc></OfferContentTargetURLDesc>
           <OfferIdExt>11ECFOCHTUM10536665EA</OfferIdExt>
     </OfferContent>
</OfferContent>


At this point, we have all the data ready and can move on to the next step, the mapping. Before that, however, we will perform one additional step and remove the namespace prefixes from the source payload. For this, we will use the XSLT element.


<?xml version="1.0"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
    <xsl:output indent="yes" method="xml" encoding="utf-8" omit-xml-declaration="no"/>
<!-- Stylesheet to remove all namespaces from a document -->
<!-- template to copy elements -->
    <xsl:template match="*">
        <xsl:element name="{local-name()}">
            <xsl:apply-templates select="@* | node()"/>
        </xsl:element>
    </xsl:template>
<!-- template to copy attributes -->
    <xsl:template match="@*">
        <xsl:attribute name="{local-name()}">
            <xsl:value-of select="."/>
        </xsl:attribute>
    </xsl:template>
    <!-- template to copy the rest of the nodes -->
    <xsl:template match="comment() | text() ">
        <xsl:copy/>
    </xsl:template>
</xsl:stylesheet>
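For example, applying this stylesheet to an element such as <ns0:Offers xmlns:ns0="http://example.com/offers"> (an illustrative name) yields a plain <Offers> element, so that the subsequent mapping step can match the uploaded XSD, which contains no namespace prefixes.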


Step 3: Mapping and Importing Offers

The last step to import offers into SAP Marketing Cloud is the mapping, which transforms the data into the required format. Here, we must provide the XML schemas of the source and target messages by uploading the respective XSD (for the source message) and EDMX (for the target message) files to SAP CPI. There are free online tools that take an XML instance document and output a corresponding XSD schema.

Once we upload the XSD file of the source message to our iFlow, we will generate an EDMX file which stores the schema of the entities encapsulated in the OData service, including their fields and relationships. In our case, the CUAN_OFFER_IMPORT_SRV OData service will be used.

To invoke the OData service, we will implement a Request-Reply pattern scenario.



To invoke an external OData service, we need to configure several parameters. In order to get the EDMX file, we will use the Query Editor to create the correct OData service endpoint when connecting to the service provider. In other words, the Query Editor is used to model the access to the OData source.



After clicking on the Step 2 button, the Query Editor connects to the service and retrieves its metadata information. From the Fields list, we will select the required fields by checking their respective checkboxes. This information is again retrieved from the service’s metadata information.



After finishing this step, you've completed the configuration of the OData adapter, and the EDMX file is automatically created and added to the Resources tab of our iFlow. This file will be used for the mapping step. The source message for the mapping is the structure defined in the automatically generated XSD file, and the target message is described in the generated EDMX file.



Once the configuration of the mapping activity is completed, you can now save, deploy, and run your new integration flow. What the imported offer looks like is shown in the picture below:



Conclusion

In this article, we showed how to use AWS S3 to read and delete an object stored in an S3 bucket. We described one method of obtaining additional data through an external call during processing, showed how to make a single batch call to fetch all required data instead of thousands of individual calls, and demonstrated how to use the XSLT mapping capability to generate an OData request.
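The delete step itself was not shown in detail. Under the same signing approach as in Step 1, it amounts to repeating the request with the DELETE method (a sketch based on the Step 1 script):

// Reuse the signing code from Step 1, changing only the HTTP method;
// the canonical request and headers are computed exactly as before.
String method = 'DELETE';
// On success, S3 answers a DeleteObject request with 204 No Content.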