The microservices revolution has transformed how modern applications are built. But can a platform born in the era of monolithic applications successfully participate in a microservices architecture?
Yes — but only if you stop trying to force IBM i into a cloud-native mold. Most microservices initiatives fail not because of technology, but because of over-fragmentation and a fundamental misunderstanding of IBM i's role. This article cuts through the hype and focuses on what actually survives in production environments.
IBM i is at its strongest when it remains the system of record and business logic engine. Your decades of tested, reliable RPG code does not need to be rewritten — it needs to be exposed as services.
Running containers on IBM i is technically possible but rarely practical. Limited container ecosystem, resource overhead, and licensing complexity make it the wrong tool for the job. Keep IBM i running native applications and run your containers on Linux — integrating via APIs.
Batch jobs that run once a day, tightly coupled business processes, and legacy code that simply works and is not changing — none of these are microservice candidates.
A single API gateway in the cloud or DMZ serves as the entry point for all traffic, routing to either cloud services or IBM i services behind the firewall. This provides a security boundary, centralized authentication, rate limiting, and circuit breaker protection for IBM i services.
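The gateway's two core decisions, where to route and whether to admit the request, can be sketched independently of any particular gateway product. The route table, upstream hosts, and limits below are illustrative assumptions, not a real configuration:

```javascript
// Illustrative route table: path prefix → upstream service
const routes = [
  { prefix: '/api/orders',    upstream: 'https://ibmi.internal:8443' },
  { prefix: '/api/analytics', upstream: 'https://cloud.internal:443' }
];

function resolveUpstream(path) {
  const route = routes.find(r => path.startsWith(r.prefix));
  return route ? route.upstream : null; // null → gateway returns 404
}

// Token-bucket rate limiter: shields IBM i from request floods
function createLimiter(capacity, refillPerSec) {
  let tokens = capacity;
  let last = Date.now();
  return function allow() {
    const now = Date.now();
    tokens = Math.min(capacity, tokens + ((now - last) / 1000) * refillPerSec);
    last = now;
    if (tokens >= 1) { tokens -= 1; return true; }
    return false;
  };
}
```

A production gateway (Kong, AWS API Gateway) implements the same two checks; the sketch only shows why they belong at the edge rather than on IBM i itself.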
IBM i publishes events to a message bus. Downstream services subscribe and react independently. IBM i never needs to know who is consuming its events — making this the cleanest decoupling pattern available.
Three implementation options exist depending on your environment and tooling preferences.
-- Option A: a Db2 trigger calls a publishing procedure
CREATE TRIGGER ORDER_CHANGE
  AFTER INSERT ON ORDERS
  REFERENCING NEW AS NEW
  FOR EACH ROW MODE DB2ROW
BEGIN ATOMIC
  CALL PUBLISH_EVENT('order.created', NEW.ORDER_ID);
END
// Option B: RPG writes the event to a data queue by calling the
// QSNDDTAQ API (prototype: name char(10), library char(10),
// length packed(5:0), data)
QSNDDTAQ('ORDEVENTS' : 'MYLIB' : %len(%trimr(eventData)) : eventData);

// A separate Node.js or Python service monitors the queue
// and publishes events to Kafka or RabbitMQ
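Whichever broker you choose, the bridge service in Option B must first unpack the fixed-format data-queue entry into a JSON event before publishing. A minimal sketch, assuming a hypothetical 10-byte event type / 10-byte order ID layout (the real layout is whatever your RPG `eventData` contains); the actual publish would use a client library such as kafkajs or amqplib:

```javascript
// Hypothetical fixed-format layout of the data-queue entry:
//   bytes 0-9  : event type, e.g. 'ORDCRT'
//   bytes 10-19: order ID
function parseQueueEntry(raw) {
  return {
    type: raw.slice(0, 10).trim(),
    orderId: raw.slice(10, 20).trim(),
    receivedAt: new Date().toISOString()
  };
}
```

Keeping the parse in one small function means the RPG record layout is documented in exactly one place on the Node.js side.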
For Option C, CDC tools such as IBM InfoSphere CDC, Precisely Connect, or Debezium capture database changes and stream them directly to your message bus — no application code changes required.
Each frontend (mobile, web, partner API) gets its own dedicated BFF service that aggregates multiple IBM i calls, transforms data into the right shape, and handles caching. This dramatically reduces the number of round trips to IBM i and keeps response payloads lean.
// Mobile BFF — parallel calls, minimal response payload
app.get('/api/mobile/order/:id', async (req, res) => {
  const order = await ibmiService.getOrder(req.params.id);
  // Fetch related data in parallel — not sequentially
  const [customer, items] = await Promise.all([
    ibmiService.getCustomer(order.customerId),
    ibmiService.getOrderItems(order.id)
  ]);
  // Return only what mobile needs — nothing more
  res.json({
    id: order.id,
    status: order.status,
    total: order.total,
    customerName: customer.name,
    itemCount: items.length
  });
});
// Web BFF — same IBM i calls, richer response
app.get('/api/web/order/:id', async (req, res) => {
  const order = await ibmiService.getOrder(req.params.id);
  const [customer, items, history] = await Promise.all([
    ibmiService.getCustomer(order.customerId),
    ibmiService.getOrderItems(order.id),
    ibmiService.getOrderHistory(order.id)
  ]);
  res.json({ order, customer, items, history });
});
dcl-proc getCustomer;
  dcl-pi *n;
    request likeds(IL_REQUEST);
    response likeds(IL_RESPONSE);
  end-pi;

  dcl-s customerId varchar(10);
  dcl-s customerData varchar(1000);

  customerId = il_getPathParameter(request: 'id');

  exec sql SELECT JSON_OBJECT(
             'id' VALUE CUST_ID,
             'name' VALUE CUST_NAME,
             'email' VALUE CUST_EMAIL)
           INTO :customerData
           FROM CUSTOMERS
           WHERE CUST_ID = :customerId;

  il_responseWrite(response: customerData);
  il_responseWriteStatus(response: 200);
end-proc;
const { dbconn, dbstmt } = require('idb-connector');

app.get('/api/customers/:id', (req, res) => {
  const connection = new dbconn();
  connection.conn('*LOCAL');
  const statement = new dbstmt(connection);
  // Parameter marker avoids SQL injection
  statement.prepare('SELECT * FROM CUSTOMERS WHERE CUST_ID = ?', () => {
    statement.bindParameters([req.params.id], () => {
      statement.execute(() => {
        statement.fetchAll((rows) => {
          res.json(rows && rows.length ? rows[0] : {});
          statement.close();
          connection.disconn();
          connection.close();
        });
      });
    });
  });
});
Best for quick prototypes and simple CRUD operations in organizations already standardized on IBM tooling.
| Component | Recommended Options | Best For |
|---|---|---|
| Message Broker | Apache Kafka, RabbitMQ, AWS SQS | Event streaming, async processing |
| API Gateway | Kong, AWS API Gateway, Apigee | Auth, rate limiting, routing |
| Service Mesh | Istio, Linkerd, Consul | Advanced traffic, observability |
| Monitoring | Prometheus + Grafana, ELK Stack | Metrics, logging, alerting |
Teams try to containerize IBM i applications or move them to cloud VMs without architectural changes. IBM i licensing does not work well in cloud environments, network latency causes performance issues, and costs spiral out of control.
The right approach: Keep IBM i on-premises or in dedicated hosting. Expose services via APIs. Move new functionality to the cloud. Gradually reduce the IBM i footprint over time as appropriate.
Every program becomes a microservice. You end up with hundreds of tiny services, explosive operational complexity, and network overhead that destroys performance.
// getCustomerName service
// getCustomerAddress service
// getCustomerPhone service
// getCustomerEmail service
// Four network calls where one would do
// Customer Service — handles all customer operations
// GET /customers/:id → returns full customer object
// PUT /customers/:id → updates customer
// POST /customers → creates customer
// One service, one network call, all customer data
Critical reality: A distributed monolith is worse than a regular monolith — all the complexity, none of the benefits.
Data gets split across services without a proper consistency strategy. Business rules get violated. Reconciliation becomes a full-time job.
Keep transactional data on IBM i. Use eventual consistency only for non-critical data. Have a single clear source of truth for every data entity. When distributed transactions are unavoidable, implement the Saga pattern.
// Order Creation Saga
// Step 1: Create order (IBM i) ✓
// Step 2: Reserve inventory (IBM i) ✓
// Step 3: Process payment (Cloud) ✗ FAILS
// Step 4: Compensate → Release inventory (IBM i)
// Step 5: Compensate → Cancel order (IBM i)
// System returns to consistent state automatically
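The compensation flow above can be expressed as a small orchestrator: each step pairs an action with a compensation, and on failure the completed steps are compensated in reverse order. A minimal sketch, not tied to any framework; the step names and actions are placeholders:

```javascript
// Minimal saga orchestrator: runs steps in order; on failure,
// compensates completed steps in reverse (LIFO) order.
async function runSaga(steps) {
  const done = [];
  try {
    for (const step of steps) {
      await step.action();
      done.push(step);
    }
    return { ok: true, compensated: [] };
  } catch (err) {
    const compensated = [];
    for (const step of done.reverse()) {
      await step.compensate();   // e.g. release inventory, cancel order
      compensated.push(step.name);
    }
    return { ok: false, failed: err.message, compensated };
  }
}
```

In the order example, a payment failure would trigger the inventory and order compensations automatically, returning the system to a consistent state without manual reconciliation.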
APIs are exposed without proper security, or security is bolted on after the fact. The result is data breaches, compliance violations, and audit failures.
Implement security from day one: OAuth 2.0 / JWT for authentication, API keys for service-to-service communication, rate limiting and throttling at the gateway, and audit logging for all access. Each layer must be independent — if one fails, the others still protect.
Opening IBM i connections is expensive. Never open and close a connection per request.
app.get('/api/data', (req, res) => {
  const conn = new Connection();
  conn.connect();
  // use connection
  conn.close(); // Expensive open/close on every request
});
// Illustrative pool API; in practice use a pooling library
// such as generic-pool, or node-odbc's built-in pool
const pool = new ConnectionPool({ size: 10 });

app.get('/api/data', async (req, res) => {
  const conn = await pool.acquire();
  try {
    // use connection
  } finally {
    pool.release(conn); // Always release back to pool
  }
});
| Category | Metric | Why It Matters |
|---|---|---|
| Response Time | p50, p95, p99 | Identifies tail latency affecting real users |
| Throughput | Requests per second | Capacity planning and scaling decisions |
| Error Rate | % failed requests | Early warning for downstream failures |
| IBM i System | CPU, memory, job queue depth | Prevents IBM i from becoming the bottleneck |
| Connection Pool | Pool utilization % | Detects connection exhaustion before it causes failures |
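The p50/p95/p99 figures in the table come from sorting a window of latency samples and taking the nearest rank; a minimal sketch of that calculation:

```javascript
// Nearest-rank percentile over a latency sample (values in ms)
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}
```

Real monitoring stacks (Prometheus histograms, for instance) estimate percentiles from buckets rather than raw samples, but the interpretation is the same: p99 is the latency that 99% of requests beat.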
Inventory existing IBM i applications. Identify service candidates. Map dependencies. Define bounded contexts. Choose your technology stack. Deliver a service catalog, architecture diagram, and risk assessment.
Set up API gateway. Implement authentication. Create one pilot service — choose something low-risk but valuable such as a customer lookup or order status inquiry. Establish monitoring, logging, and documentation standards.
Expose additional services prioritized by business value. Implement event-driven patterns. Build BFF layers. Migrate frontends to use APIs. Decommission old integration interfaces.
Performance tuning and cost optimization. Service consolidation or splitting based on real usage data. Implement advanced patterns — circuit breakers, bulkheads, retries. Continuous improvement.
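One of the advanced patterns above, the circuit breaker, fits in a few lines. The threshold and cooldown values here are illustrative, and a production service would typically reach for a library such as opossum; the sketch shows the mechanism:

```javascript
// Minimal circuit breaker: opens after `threshold` consecutive failures,
// fails fast while open, and retries again after `cooldownMs`.
function createBreaker(fn, { threshold = 3, cooldownMs = 5000 } = {}) {
  let failures = 0;
  let openedAt = 0;
  return async function call(...args) {
    if (failures >= threshold && Date.now() - openedAt < cooldownMs) {
      throw new Error('circuit open'); // fail fast, protect IBM i
    }
    try {
      const result = await fn(...args);
      failures = 0; // a success closes the circuit
      return result;
    } catch (err) {
      failures += 1;
      if (failures >= threshold) openedAt = Date.now();
      throw err;
    }
  };
}
```

Wrapping each IBM i service call this way means a struggling backend sheds load immediately instead of queuing requests until the whole system stalls.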
The challenge: Provide mobile access for the sales team, integrate with a cloud-based CRM, enable real-time inventory visibility, and support B2B partner integrations — without disrupting the IBM i systems running the business.
Key success factors: Started with a single pilot service. Kept core business logic on IBM i. Used proven, well-supported technologies. Incremental rollout with strong monitoring from day one.
Before committing to microservices, understand when they are the wrong choice. They are not a universal solution — and for IBM i environments, the bar for adopting them should be high.
Microservices are the wrong choice when:

- You have a single small team (the operational overhead is unsustainable)
- Operations must be atomic across multiple entities (distributed transactions are complex and error-prone)
- Your code changes quarterly or annually (the overhead is not justified)
- You lack CI/CD pipelines and automated deployment
- You cannot clearly define service boundaries — in which case you will create a distributed monolith, which is worse than a regular one
The goal is not to modernize IBM i. The goal is to modernize how IBM i participates in your architecture.
Your IBM i system has decades of battle-tested business logic. Do not throw that away. Give it a modern interface and let it continue doing what it does best while you build new capabilities around it.
Microservices are a means to an end — not the end itself. The goal is to deliver business value faster, more reliably, and more efficiently. If wrapping your IBM i applications in a well-designed service layer achieves that, you have succeeded.