Wednesday, April 29, 2026

Handling JSON Data in OmniScript File Uploads

While testing File Upload functionality in Salesforce OmniScript, an important limitation was observed during debugging and data handling.

Key Observation

When testing an OmniScript with File Upload, the JSON data is not directly visible in Debug mode. This can be challenging when you need to extract specific attributes from the uploaded file data for use in subsequent OmniScript steps.

Additionally:

  • %Data% and %Context% do not work directly when placed in a Text Block after the OmniScript is activated.
  • Unlike during preview or debug, activated OmniScripts do not automatically expose the full JSON payload for display.



Solution: Use Data Mapper (Transform Action)

To make the required attributes available for downstream processing or display:

  • Use a Data Mapper – Transform Action
  • Explicitly map the required attributes from the file upload JSON into output nodes
  • These mapped outputs can then be referenced reliably in:
    • Subsequent OmniScript elements
    • Integration Procedures
    • Text Blocks or UI components

This approach ensures the needed data is surfaced in a controlled and supported way, instead of relying on raw %Data% or %Context%.
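As an illustration only, the mapping might look like the sketch below. The node names (FileUpload1, UploadedFileName, and so on) are hypothetical and will differ based on your OmniScript and Data Mapper configuration:

```json
{
  "dataMapperInput": {
    "FileUpload1": {
      "filename": "contract.pdf",
      "ContentDocumentId": "069xxxxxxxxxxxxxxx"
    }
  },
  "dataMapperOutput": {
    "UploadedFileName": "contract.pdf",
    "UploadedFileId": "069xxxxxxxxxxxxxxx"
  }
}
```

The mapped output nodes can then be referenced with the usual merge-field syntax (for example, %UploadedFileName%) in later OmniScript elements.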


Helpful Reference

Salesforce documentation that explains this behavior and recommended approach in detail:
🔗 Salesforce Help Article
https://help.salesforce.com/s/articleView?id=000391069&type=1

PortQry to Check If a Server Is Accepting Traffic; Use Case: AWS Machines Unable to Access an Application Link

 





Use Case: AWS Machines Unable to Access an Application Link

In this use case, AWS-hosted machines were unable to open a required application link. Initial investigation showed that the application itself was available, but requests from the AWS environment were not reaching the target server.

The issue was identified as a network firewall restriction. The Network/Firewall team was engaged, and specific AWS subnets were enabled and whitelisted to allow outbound access to the application URL. Once the firewall rules were updated, connectivity was restored.

To validate connectivity during troubleshooting, PortQry was used to check whether the target server was accepting traffic on the required port.

PortQry Usage

PortQry is a command-line utility used to verify network connectivity to a specific server and port.

Purpose:

  • Confirm whether the target server is reachable
  • Identify whether traffic is allowed, blocked, or filtered by a firewall

Example:

portqry.exe -n <target-server> -e <port> -p TCP

If the result shows FILTERED, it typically indicates that a firewall or network security device is blocking the traffic, confirming the need for Network/Firewall team intervention.
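PortQry is a Windows-only tool. As a rough cross-platform alternative, the same check can be sketched with Python's standard socket module; the server name and port in the usage line are placeholders:

```python
import socket

def check_port(host: str, port: int, timeout: float = 3.0) -> str:
    """Classify TCP reachability of host:port, mirroring PortQry's result states."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "LISTENING"      # connection succeeded
    except socket.timeout:
        return "FILTERED"           # no reply at all: often a firewall silently dropping packets
    except ConnectionRefusedError:
        return "NOT LISTENING"      # host reachable, but nothing listening on that port
    except OSError:
        return "UNREACHABLE"        # DNS failure, routing problem, etc.

# Example (placeholder host/port):
# print(check_port("target-server", 443))
```

A timeout here corresponds to PortQry's FILTERED result, while an immediate refusal means the server itself is reachable but the port is closed.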

Tuesday, January 6, 2026

DB2 ODBC Connection Properties; UiPath RPA

Error while executing DB query: ERROR [IM003] Specified driver could not be loaded due to system error 1114: A dynamic link library (DLL) initialization routine failed. (IBM DB2 ODBC DRIVER - DB2COPY1, C:\PROGRA~1\IBM\SQLLIB\BIN\DB2CLIO.DLL).


Switching from the failing ODBC driver to the IBM OLE DB provider fixed the issue.

Connection String Format: 


Provider=IBMOLEDB.DB2COPY1;Data Source=DB2B;Password={0};User ID={1}







Thursday, September 25, 2025

Managing Transactions in Salesforce: Why Only One Can Be in Working Status



When working with transactions in Salesforce, especially in custom implementations or integrations involving bots or external systems, it's important to understand a key constraint:

At any given time, only one transaction can be in "Working" status.

This design ensures data integrity and prevents conflicts during concurrent updates. However, it also means that if you attempt to initiate a new transaction while another is still active, Salesforce will block the operation until the previous one is resolved.

How to Handle This Situation

If you find yourself unable to proceed with a new transaction, follow these steps:

  1. Identify Active Transactions
    Navigate to the transaction records and look for any entries marked as "Working."

  2. Click into Each Active Transaction
    Open each transaction individually to review its status and details.

  3. Cancel and Discard Changes
    Use the Cancel Transaction and Discard Changes options to terminate the active transaction. This will release the lock and allow you to proceed with a new one.

  4. Retry Your New Transaction
    Once all previous transactions are cleared, you can initiate your new transaction without issues.

Why This Matters

Failing to cancel previous transactions can lead to:

  • Errors in automation flows
  • Incomplete data updates
  • Conflicts in record locking
  • Frustration for users and bots alike

By maintaining a clean transaction state, you ensure smoother operations and better system performance.

Tuesday, September 2, 2025

🔧 How to Open a .nupkg File by Renaming It to .zip (And What to Do If It Doesn’t Work)



If you've ever worked with NuGet packages, you've likely come across files with the .nupkg extension. These are essentially ZIP archives that contain compiled code, metadata, and other resources used in .NET projects. But what if you want to peek inside?

✅ The Quick Tip

You can rename a .nupkg file to .zip and open it like any regular archive. For example:

MyPackage.nupkg → MyPackage.zip

Then, just double-click to explore its contents.


🛠️ What to Do If Renaming Doesn’t Work

Sometimes, simply renaming the file doesn’t seem to do the trick. Here are a few things to check:

1. Make Sure File Extensions Are Visible

Windows hides known file extensions by default. This can lead to mistakes like renaming MyPackage.nupkg to MyPackage.zip.nupkg.

Fix it:

  • Open File Explorer
  • Go to the View tab
  • Check File name extensions

2. Use a Zip Tool Directly

Instead of renaming, right-click the .nupkg file and choose:

  • Open with → 7-Zip
  • Open with → WinRAR
  • Or extract it using the Windows built-in zip extractor
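Both tricks work because a .nupkg is a standard ZIP archive under the hood. A minimal Python sketch (with a made-up package name and contents) confirms that ZIP tooling reads it directly, no rename required:

```python
import zipfile

# Build a tiny stand-in package; a real .nupkg would come from NuGet.
with zipfile.ZipFile("MyPackage.nupkg", "w") as pkg:
    pkg.writestr("MyPackage.nuspec", "<package/>")
    pkg.writestr("lib/net6.0/MyPackage.dll", b"")

# zipfile opens the .nupkg directly -- no rename to .zip needed.
with zipfile.ZipFile("MyPackage.nupkg") as pkg:
    print(pkg.namelist())  # → ['MyPackage.nuspec', 'lib/net6.0/MyPackage.dll']
```

Renaming to .zip only matters for Windows Explorer, which picks its handler from the extension rather than the file contents.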

Wednesday, August 20, 2025

🚀 Boosting API Testing with Pre-request and Post-response Scripts in Postman

As I’ve been working on API development and testing, I recently explored how to use Pre-request and Post-response scripts in Postman to automate and streamline my workflow. Here’s what I learned and how you can apply it too!


🔧 Pre-request Script: Dynamic Timestamping

I needed to send a request with a timestamp that was exactly 5 hours before the current time. Here’s the script I used:

let now = new Date();
let hour = 60 * 60 * 1000; // one hour in milliseconds
let multiplier = 5;        // offset in hours

// Shift the current time back by 5 hours
let time_iso = new Date(now.getTime() - multiplier * hour);
pm.environment.set("time_iso", time_iso.toISOString());

✅ This script calculates the time offset and stores it in an environment variable time_iso, which I then used in the request body like this:

{
  "timestamp": "{{time_iso}}"
}


๐Ÿ” Authorization Setup

To authenticate the request, I used a Bearer Token in the Authorization tab:

  • Type: Bearer Token
  • Token: {{auth_token}} (stored in environment variables)

Alternatively, you can add it manually in the Headers tab:

Key: Authorization
Value: Bearer {{auth_token}}

🧪 Post-response Script: Status Check and Token Extraction

After the request, I wanted to validate the response and extract a token for future use. Here’s the script I used:

pm.test("Status code is 200", function () {
    pm.response.to.have.status(200);
});

let responseData = pm.response.json();

if (responseData.token) {
    pm.environment.set("auth_token", responseData.token);
    console.log("Token saved to environment:", responseData.token);
} else {
    console.warn("Token not found in response");
}

✅ This script checks if the response was successful and saves the token to an environment variable for reuse.


💡 Final Thoughts

Using these scripts has made my API testing more dynamic and efficient. I can now automate timestamp generation, handle authentication seamlessly, and extract data from responses without manual effort.

If you're working with APIs in Postman, I highly recommend exploring these scripting features—they’re simple to implement and incredibly powerful!