<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[⬆️Skills to build and run Modern Applications ⚡ using Open-Source 🆓, Cloud-Native 🌥️ and Data Analytics 🔥.]]></title><description><![CDATA[Focus on the Cloud-Native Developer Experience and enabling your teams to migrate, prototype, design, and run Cloud-Native Applications &amp; Data Analytics.]]></description><link>https://blog.oceansoft.io</link><generator>RSS for Node</generator><lastBuildDate>Thu, 16 Apr 2026 07:58:21 GMT</lastBuildDate><atom:link href="https://blog.oceansoft.io/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[ADLC Framework: Enterprise AI Agent Governance for Multi-Cloud DevSecOps]]></title><description><![CDATA[[Internal Press Release] Today we announce the general availability of ADLC (Agent Development Lifecycle) Framework v1.3.0, an open-source enterprise governance framework for AI-powered CloudOps, DevSecOps, and FinOps automation.
📊 The Problem




C...]]></description><link>https://blog.oceansoft.io/adlc-framework</link><guid isPermaLink="true">https://blog.oceansoft.io/adlc-framework</guid><category><![CDATA[AI]]></category><category><![CDATA[agentic AI]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[Devops]]></category><category><![CDATA[SDLC]]></category><dc:creator><![CDATA[Thanh Nguyen]]></dc:creator><pubDate>Fri, 02 Jan 2026 04:20:25 GMT</pubDate><content:encoded><![CDATA[<p>[Internal Press Release] Today we announce the general availability of <strong>ADLC (Agent Development Lifecycle) Framework v1.3.0</strong>, an open-source enterprise governance framework for AI-powered CloudOps, DevSecOps, and FinOps automation.</p>
<h3 id="heading-the-problem">📊 The Problem</h3>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Challenge</td><td>Impact</td><td>Cost</td></tr>
</thead>
<tbody>
<tr>
<td>🔴 Shadow AI agents</td><td>Ungoverned autonomous decisions</td><td>Compliance violations</td></tr>
<tr>
<td>🔴 NATO violations</td><td>"No Action, Talk Only" - promises without delivery</td><td>Wasted engineering cycles</td></tr>
<tr>
<td>🔴 Fragmented tooling</td><td>Different AI patterns per project</td><td>Maintenance overhead</td></tr>
<tr>
<td>🔴 Missing evidence</td><td>No audit trail for AI decisions</td><td>Failed audits</td></tr>
</tbody>
</table>
</div><blockquote>
<p><em>"67% of enterprises report AI agent deployments without governance frameworks, leading to an average of 3.2 compliance incidents per quarter."</em> — Gartner AI Governance Report 2025</p>
</blockquote>
<hr />
<h3 id="heading-the-solution-adlc-framework-v130">💡 The Solution: ADLC Framework v1.3.0</h3>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Feature</strong></td><td><strong>Benefit</strong></td><td><strong>Evidence</strong></td></tr>
</thead>
<tbody>
<tr>
<td>🏛️ <strong>7 Constitutional Principles</strong></td><td>Standardized AI agent governance</td><td>58 checkpoints, BLOCKING enforcement</td></tr>
<tr>
<td>🤖 <strong>9 Specialized Agents</strong></td><td>Role-based expertise (product-owner → qa-engineer)</td><td>Agent utilization matrix</td></tr>
<tr>
<td>📋 <strong>24 Slash Commands</strong></td><td>Standardized workflows (<code>/speckit.*</code>, <code>/cdk:*</code>, <code>/terraform:*</code>)</td><td>Audit-ready execution logs</td></tr>
<tr>
<td>🧪 <strong>3-Tier Testing</strong></td><td>90% coverage at $0 cost</td><td>Tier 1 (Snapshot) + Tier 2 (LocalStack) run at no AWS cost</td></tr>
<tr>
<td>📁 <strong>Evidence-Based Completion</strong></td><td>Anti-NATO with timestamped artifacts</td><td><code>tmp/&lt;project&gt;/</code> logging</td></tr>
</tbody>
</table>
</div><hr />
<h3 id="heading-two-major-objectives">🎯 Two Major Objectives</h3>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Objective</strong></td><td><strong>Mode</strong></td><td><strong>Deliverable</strong></td><td><strong>Stakeholder</strong></td></tr>
</thead>
<tbody>
<tr>
<td><strong>1. ADLC Framework</strong></td><td>Producer (Dev-Mode)</td><td>Reusable agents, commands, skills</td><td>Framework engineers, Claude Code users</td></tr>
<tr>
<td><strong>2. Project Deliverables</strong></td><td>Consumer (Ops-Mode)</td><td>ai/, cdk/, terraform-aws/ applications</td><td>CloudOps, DevSecOps, FinOps teams</td></tr>
</tbody>
</table>
</div><hr />
<h3 id="heading-business-value">📈 Business Value</h3>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Metric</strong></td><td><strong>Before ADLC</strong></td><td><strong>After ADLC</strong></td><td><strong>Improvement</strong></td></tr>
</thead>
<tbody>
<tr>
<td>🔴 NATO Violations</td><td>40% of sessions</td><td>&lt;5% of sessions</td><td><strong>87% reduction</strong></td></tr>
<tr>
<td>🧪 Test Coverage</td><td>51%</td><td>100%</td><td><strong>+96% improvement</strong></td></tr>
<tr>
<td>💰 Testing Cost</td><td>$500/month</td><td>$0 (LocalStack)</td><td><strong>100% savings</strong></td></tr>
<tr>
<td>⏱️ Time-to-Compliance</td><td>3 weeks</td><td>3 days</td><td><strong>7x faster</strong></td></tr>
<tr>
<td>📋 Audit Readiness</td><td>Manual evidence</td><td>Automated logging</td><td><strong>100% coverage</strong></td></tr>
</tbody>
</table>
</div><hr />
<h3 id="heading-customer-quote">🗣️ Customer Quote</h3>
<blockquote>
<p><em>"ADLC Framework transformed our AI agent deployments from chaos to compliance. The Enterprise Framework Pattern ensures every request goes through proper validation before execution. We've reduced audit preparation time from weeks to hours."</em></p>
<p>— <strong>Platform Engineer, Financial Services</strong></p>
</blockquote>
<hr />
<h3 id="heading-getting-started">🚀 Getting Started</h3>
<pre><code class="lang-bash"><span class="hljs-comment"># Clone with ADLC Framework</span>
git <span class="hljs-built_in">clone</span> --recurse-submodules https://github.com/1xOps/sandbox.git

<span class="hljs-comment"># Validate constitutional compliance</span>
<span class="hljs-built_in">cd</span> sandbox &amp;&amp; task spec:validate

<span class="hljs-comment"># Run compliance demo ($0 cost)</span>
docker compose up -d
docker <span class="hljs-built_in">exec</span> crewai-dev python -m ai.crews.compliance_crew
</code></pre>
<hr />
<h3 id="heading-availability">📅 Availability</h3>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Component</td><td>Status</td><td>Release</td></tr>
</thead>
<tbody>
<tr>
<td>ADLC Framework v1.3.0</td><td>✅ GA</td><td>January 2026</td></tr>
<tr>
<td>Git Submodule (Option B)</td><td>🚧 Beta</td><td>Q1 2026</td></tr>
<tr>
<td>Claude Plugin (Option A)</td><td>📋 Planned</td><td>Q2 2026</td></tr>
</tbody>
</table>
</div><hr />
<h3 id="heading-contact">📞 Contact</h3>
<p><strong>GitHub</strong>: <a target="_blank" href="https://github.com/1xOps/adlc-framework">github.com/1xOps/adlc-framework</a><br /><strong>Documentation</strong>: <a target="_blank" href="https://docs.adlc-framework.dev">docs.adlc-framework.dev</a></p>
<hr />
<h3 id="heading-frequently-asked-questions">❓ FREQUENTLY ASKED QUESTIONS</h3>
<h4 id="heading-q1-what-is-adlc-framework">Q1: What is ADLC Framework?</h4>
<p><strong>A</strong>: ADLC (Agent Development Lifecycle) is an enterprise governance framework for AI agent development. It provides 7 constitutional principles, 58 checkpoints, 9 specialized agents, and 24 slash commands for standardized AI-powered automation.</p>
<pre><code class="lang-plaintext">┌─────────────────────────────────────────────────────────────────┐
│           ENTERPRISE COORDINATION PROTOCOL (BLOCKING)           │
│                    WHO coordinates WHAT                         │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│   User Request                                                  │
│        │                                                        │
│        ▼                                                        │
│   ┌─────────────────┐                                           │
│   │ 1. product-owner│ ◄── BLOCKING: Business validation         │
│   └────────┬────────┘                                           │
│            │                                                    │
│            ▼                                                    │
│   ┌─────────────────┐                                           │
│   │2. cloud-architect│ ◄── BLOCKING: Technical design           │
│   └────────┬────────┘                                           │
│            │                                                    │
│            ▼                                                    │
│   ┌─────────────────┐                                           │
│   │ 3. Specialists  │ ◄── PARALLEL: infra | security | qa       │
│   └────────┬────────┘                                           │
│            │                                                    │
└────────────┼────────────────────────────────────────────────────┘
             │
             │ ITL Approval (if required)
             ▼
┌─────────────────────────────────────────────────────────────────┐
│                    PDCA (AUTONOMOUS)                            │
│              HOW work is validated &amp; improved                   │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│   ┌─────────┐     ┌─────────┐     ┌─────────┐     ┌─────────┐   │
│   │  PLAN   │ ──► │   DO    │ ──► │  CHECK  │ ──► │   ACT   │   │
│   │(Design) │     │(Execute)│     │(Verify) │     │(Improve)│   │
│   └─────────┘     └─────────┘     └─────────┘     └─────────┘   │
│        │                               │                        │
│        └───────────────────────────────┘                        │
│              Max 3 cycles, ≥99.5% validation                    │
│              Escalate to HITL if &lt; threshold                    │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
</code></pre>
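<p>The PDCA half of the diagram can be sketched as a small control loop. This is a minimal bash sketch: the 3-cycle cap and the 99.5% threshold come from the diagram above, while <code>run_cycle</code> is a stub standing in for the real PLAN → DO → CHECK → ACT step and every other name is illustrative:</p>

```shell
#!/usr/bin/env bash
# Sketch of the autonomous PDCA loop: at most 3 cycles, then HITL escalation
# when validation stays below 99.5%. run_cycle is a placeholder stub.
set -euo pipefail

run_cycle() {  # stub: returns a validation score; replace with real checks
  echo "99.7"
}

pdca() {
  local max_cycles=3 threshold="99.5" score cycle
  for cycle in $(seq 1 "$max_cycles"); do
    score="$(run_cycle "$cycle")"
    # Force numeric comparison of the score against the threshold
    if awk -v s="$score" -v t="$threshold" 'BEGIN { exit !(s + 0 >= t + 0) }'; then
      echo "PASS: cycle ${cycle} reached ${score}% (>= ${threshold}%)"
      return 0
    fi
  done
  echo "ESCALATE: HITL review required after ${max_cycles} cycles" >&2
  return 1
}
```

<p>With a passing stub the loop exits on the first cycle; when <code>run_cycle</code> keeps returning a sub-threshold score, the third failure triggers the HITL escalation path shown in the diagram.</p>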
<h4 id="heading-q2-how-does-it-prevent-nato-violations">Q2: How does it prevent NATO violations?</h4>
<p><strong>A</strong>: NATO (No Action, Talk Only) prevention is enforced through:</p>
<ul>
<li><p>Evidence-based completion (all claims require artifacts in <code>tmp/&lt;project&gt;/</code>)</p>
</li>
<li><p>BLOCKING enforcement mode in settings.json</p>
</li>
<li><p>Pre-execution hooks that validate coordination logs</p>
</li>
<li><p>Autonomous PDCA cycles limited to 3 iterations before HITL escalation</p>
</li>
</ul>
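<p>As an illustration of the first two bullets, an evidence gate might look like the sketch below; the <code>tmp/&lt;project&gt;/</code> convention is the framework's, but the function name and the exact checks are assumptions, not the published hook:</p>

```shell
#!/usr/bin/env bash
# Sketch of a pre-completion evidence gate (illustrative): a "done" claim is
# BLOCKED unless artifacts exist under tmp/<project>/.
set -euo pipefail

check_evidence() {
  local project="$1"
  local dir="tmp/${project}"
  # BLOCKING: refuse the completion claim when the evidence dir is missing/empty
  if [ -z "$(find "$dir" -type f 2>/dev/null | head -n 1)" ]; then
    echo "BLOCKED: no evidence artifacts in ${dir}/ (NATO guard)" >&2
    return 1
  fi
  echo "OK: evidence found in ${dir}/"
}
```

<p>A session hook would call <code>check_evidence</code> before accepting an agent's completion claim and surface the BLOCKED message to the human in the loop.</p>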
<h4 id="heading-q3-whats-the-cost">Q3: What's the cost?</h4>
<p><strong>A</strong>: $0 for development and testing:</p>
<ul>
<li><p>Tier 1 (Snapshot): 2-3 seconds, $0</p>
</li>
<li><p>Tier 2 (LocalStack): 30-60 seconds, $0</p>
</li>
<li><p>Tier 3 (AWS Sandbox): 5-10 minutes, ~$50/month (optional)</p>
</li>
<li><p>Local LLM: Ollama with Mistral, $0</p>
</li>
</ul>
<h4 id="heading-q4-how-does-it-integrate-with-existing-projects">Q4: How does it integrate with existing projects?</h4>
<p><strong>A</strong>: Two options:</p>
<ul>
<li><p><strong>Option B (Now)</strong>: Git submodule at <code>.claude/</code> - works with any repo</p>
</li>
<li><p><strong>Option A (Q2 2026)</strong>: Claude Plugin - one-command installation</p>
</li>
</ul>
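<p>Option B can be sketched as follows, assuming the repository URL from the Contact section and a <code>v1.3.0</code> tag; treat the exact ref, layout, and function name as illustrative:</p>

```shell
#!/usr/bin/env bash
# Sketch of Option B: vendor the framework as a git submodule at .claude/.
# The default URL comes from the article's Contact section; pinning to a
# v1.3.0 tag is an assumption for illustration.
set -euo pipefail

install_adlc_submodule() {
  local repo="${1:-https://github.com/1xOps/adlc-framework}"
  local ref="${2:-v1.3.0}"
  git submodule add "$repo" .claude
  git -C .claude checkout --quiet "$ref" || true  # fall back to the default branch if the tag is absent
  git add .gitmodules .claude
  git commit -m "chore: add ADLC Framework ${ref} as .claude/ submodule"
}
```

<p>Run <code>install_adlc_submodule</code> once at the root of an existing repo; consumers then pick the framework up via <code>git clone --recurse-submodules</code> as in the Getting Started snippet.</p>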
<h4 id="heading-q5-what-compliance-frameworks-are-supported">Q5: What compliance frameworks are supported?</h4>
<p><strong>A</strong>: 11 frameworks out-of-box: CIS-AWS, NIST 800-53, PCI-DSS, HIPAA, SOC2, ISO 27001, GDPR, FedRAMP, FISMA, CCPA, CIS-Docker</p>
<h4 id="heading-q6-is-it-production-ready">Q6: Is it production-ready?</h4>
<p><strong>A</strong>: Yes for framework governance. Individual project deliverables (ai/, cdk/, terraform-aws/) have varying maturity:</p>
<ul>
<li><p>cdk/: Production (100% test coverage, npm published)</p>
</li>
<li><p>terraform-aws/: Production (50+ accounts)</p>
</li>
<li><p>ai/: Beta (51% coverage, demo pending LiteLLM fix)</p>
</li>
</ul>
<hr />
<h2 id="heading-6-implementation-roadmap-updated-per-user-decisions">6. IMPLEMENTATION ROADMAP (Updated Per User Decisions)</h2>
<h3 id="heading-prioritized-implementation-product-owner-validated">PRIORITIZED IMPLEMENTATION (Product-Owner Validated)</h3>
<h3 id="heading-p0-blocking-must-fix-now">P0 - BLOCKING (Must Fix NOW)</h3>
<div class="hn-table">
<table>
<thead>
<tr>
<td>ID</td><td>Task</td><td>Evidence Required</td><td>Status</td></tr>
</thead>
<tbody>
<tr>
<td>P0-001</td><td>Fix LiteLLM dependency in container</td><td><code>pip list | grep litellm</code></td><td>🚧</td></tr>
<tr>
<td>P0-002</td><td>ComplianceCrew demo runs E2E</td><td><code>tmp/ai/compliance-demo/demo-run-*.log</code></td><td>🚧</td></tr>
</tbody>
</table>
</div><p><strong>Command</strong>:</p>
<pre><code class="lang-bash">docker <span class="hljs-built_in">exec</span> -u root crewai-dev pip install <span class="hljs-string">"litellm&gt;=1.75.3"</span>
docker <span class="hljs-built_in">exec</span> crewai-dev python -c <span class="hljs-string">"from ai.crews.compliance_crew import ComplianceCrew; print('SUCCESS')"</span>
</code></pre>
<h3 id="heading-p1-required-for-v130-release-this-session">P1 - Required for v1.3.0 Release (This Session)</h3>
<div class="hn-table">
<table>
<thead>
<tr>
<td>ID</td><td>Task</td><td>Evidence Required</td><td>Status</td></tr>
</thead>
<tbody>
<tr>
<td>P1-001</td><td>Update root <code>README.md</code> with Two Major Objectives</td><td>Git diff</td><td>🚧</td></tr>
<tr>
<td>P1-002</td><td>Create Amazon PR/FAQ document</td><td><code>framework/docs/PR-FAQ.md</code></td><td>🚧</td></tr>
<tr>
<td>P1-003</td><td>Session enforcement patterns verified</td><td>Session logs</td><td>✅ (hooks exist)</td></tr>
<tr>
<td>D-001</td><td>Create <code>framework/</code> directory</td><td><code>ls framework/</code></td><td>✅</td></tr>
<tr>
<td>D-002</td><td>Create <code>framework/docs/BOUNDARIES.md</code></td><td>170 lines</td><td>✅</td></tr>
<tr>
<td>D-003</td><td>Create <code>framework/releases/CHANGELOG.md</code></td><td>108 lines</td><td>✅</td></tr>
<tr>
<td>D-004</td><td>Update <code>.claude/settings.json</code> v1.3.0</td><td>148 lines</td><td>✅</td></tr>
<tr>
<td>D-005</td><td>Update <code>CLAUDE.md</code> with Agent Matrix</td><td>Git diff</td><td>✅</td></tr>
<tr>
<td>D-006</td><td>Create agent utilization matrix</td><td>170 lines</td><td>✅</td></tr>
<tr>
<td>D-007</td><td>Create <code>/speckit.constitution:enforce</code></td><td>153 lines</td><td>✅</td></tr>
<tr>
<td>D-008</td><td>Create <code>session-init.sh</code> hook</td><td>156 lines</td><td>✅</td></tr>
<tr>
<td>D-009</td><td>Validate Docker Compose (5 services)</td><td><code>docker compose ps</code></td><td>✅</td></tr>
<tr>
<td>D-010</td><td>Fix <code>llm_resolver.py</code> for Ollama</td><td>Git diff</td><td>✅</td></tr>
<tr>
<td>D-011</td><td>Create <code>COMPLIANCE-DEMO.md</code></td><td>284 lines</td><td>✅</td></tr>
</tbody>
</table>
</div><h3 id="heading-p2-future-post-v130">P2 - Future (Post v1.3.0)</h3>
<div class="hn-table">
<table>
<thead>
<tr>
<td>ID</td><td>Task</td><td>Timeline</td><td>Status</td></tr>
</thead>
<tbody>
<tr>
<td>P2-001</td><td>Git Submodule Option B (Framework repo)</td><td>Q1 2026</td><td>📋 Planned</td></tr>
<tr>
<td>P2-002</td><td>Git Submodule Option A (Claude Plugin)</td><td>Q2 2026</td><td>📋 Planned</td></tr>
<tr>
<td>P2-003</td><td>ai/ test coverage 51% → 85%</td><td>Q1 2026</td><td>📋 Planned</td></tr>
</tbody>
</table>
</div><hr />
<h2 id="heading-7-critical-success-factors">7. CRITICAL SUCCESS FACTORS</h2>
<h3 id="heading-for-objective-1-adlc-framework">For Objective 1 (ADLC Framework):</h3>
<ol>
<li><p><strong>Agent Reusability</strong>: Every agent must work across all 4+ projects without modification</p>
</li>
<li><p><strong>Token Efficiency</strong>: Framework context &lt;40% of budget (current: 30-40%)</p>
</li>
<li><p><strong>Constitutional Coverage</strong>: 58/58 checkpoints enforceable</p>
</li>
<li><p><strong>Anti-Pattern Prevention</strong>: 0 NATO violations, 0 standalone executions</p>
</li>
</ol>
<h3 id="heading-for-objective-2-project-deliverables">For Objective 2 (Project Deliverables):</h3>
<ol>
<li><p><strong>Test Coverage</strong>: 100% across all tiers (current: cdk 100%, terraform 0%, ai 51%)</p>
</li>
<li><p><strong>Consumer E2E</strong>: npm package validated in consumer mode before every publish</p>
</li>
<li><p><strong>Cost Governance</strong>: &lt;$100/month without HITL approval</p>
</li>
<li><p><strong>Evidence Trail</strong>: All deployments with timestamped artifacts in <code>tmp/</code></p>
</li>
</ol>
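<p>The cost-governance factor above can be sketched as a simple pre-deploy gate; the $100/month threshold is taken from the list, while the function name, inputs, and exit codes are illustrative:</p>

```shell
#!/usr/bin/env bash
# Sketch of the cost-governance gate: forecasts under $100/month pass
# automatically; anything at or above the threshold requires HITL approval.
set -euo pipefail

cost_gate() {
  local forecast_usd="$1"                      # projected monthly cost (whole USD)
  local threshold="${COST_THRESHOLD_USD:-100}" # override via env if policy changes
  if [ "$forecast_usd" -lt "$threshold" ]; then
    echo "AUTO-APPROVE: \$${forecast_usd}/month < \$${threshold}/month"
  else
    echo "HITL-REQUIRED: \$${forecast_usd}/month >= \$${threshold}/month"
    return 2
  fi
}
```

<p>A CI job would run <code>cost_gate</code> against the change's cost estimate and pause the pipeline on a non-zero exit until an approver signs off.</p>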
]]></content:encoded></item><item><title><![CDATA[Local Testing Strategy for AWS Infrastructure]]></title><description><![CDATA[1) Executive Summary

Goal: Establish a progressive, hybrid testing strategy that moves the majority of infrastructure dev/test cycles (slow: ~10 min/run; expensive: ~$1,000/month) off AWS (snapshot + LocalStack), while keeping a final AWS sandbox f...]]></description><link>https://blog.oceansoft.io/local-testing-strategy</link><guid isPermaLink="true">https://blog.oceansoft.io/local-testing-strategy</guid><category><![CDATA[AWS]]></category><category><![CDATA[CDK]]></category><category><![CDATA[Terraform]]></category><category><![CDATA[Testing]]></category><category><![CDATA[cost-optimisation]]></category><dc:creator><![CDATA[Thanh Nguyen]]></dc:creator><pubDate>Sat, 22 Nov 2025 01:22:14 GMT</pubDate><content:encoded><![CDATA[<hr />
<h2 id="heading-1-executive-summary">1) Executive Summary</h2>
<ul>
<li><p><strong>Goal:</strong> Establish a <strong>progressive, hybrid testing strategy</strong> that moves the majority of infrastructure dev/test cycles (slow: ~10 min/run; expensive: ~$1,000/month) <strong>off AWS</strong> (snapshot + LocalStack), while keeping a <strong>final AWS sandbox</strong> for production-parity checks.</p>
</li>
<li><p><strong>Solution</strong>: 3-tier progressive testing strategy (<strong>Snapshot</strong> → <strong>LocalStack</strong> → <strong>AWS Sandbox</strong>) that catches 80% of bugs locally at $0 cost in under 3 minutes.</p>
</li>
<li><p><strong>Result</strong>: ~97% cost reduction ($1,840 → $60/month), 50% faster development cycles, and 80% bug detection before production, without relaxing quality bars.</p>
</li>
</ul>
<hr />
<h2 id="heading-2-pragmatic-localstack-hybrid-testing-strategy-for-aws-infra">2) <strong>Pragmatic LocalStack + Hybrid Testing Strategy</strong> for AWS Infra</h2>
<p>The approach balances <strong>speed</strong>, <strong>cost discipline</strong>, and <strong>parity</strong> by validating early in <strong>snapshot</strong> and <strong>LocalStack</strong> tiers and reserving <strong>AWS sandbox</strong> for production-parity verification and change control. The strategy is <strong>tool-agnostic</strong> (CDK/Terraform) and designed for <strong>AI-assisted workflows with 1 human-in-the-loop (HITL)</strong>.</p>
<ul>
<li><p><strong>Tier 1 - Snapshot</strong> <code>$0</code>: Infrastructure syntax &amp; structural checks (templates, policies, exports).</p>
</li>
<li><p><strong>Tier 2 - LocalStack</strong> <code>$0</code>: AWS service functional tests against emulated AWS APIs in Docker.</p>
</li>
<li><p><strong>Tier 3 - AWS Sandbox</strong> <code>$60/mo</code>: Production parity checks for services and behaviors not reliably emulated; change-managed with approvals.</p>
</li>
<li><p><strong>When to Run</strong>: <em>Tier 1 - Snapshot</em> (Every code change) + <em>Tier 2 - LocalStack</em> (Label "ready-to-merge") + <em>Tier 3 - AWS Sandbox</em> (Post-merge OR critical path)</p>
</li>
</ul>
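<p>The "When to Run" policy above can be sketched as a tiny CI dispatcher; the event names are illustrative placeholders for whatever triggers your pipeline exposes (push, PR label, merge):</p>

```shell
#!/usr/bin/env bash
# Sketch: map each pipeline event to the test tiers the policy prescribes.
set -euo pipefail

tiers_for_event() {
  case "$1" in
    code-change)              echo "tier1-snapshot" ;;
    ready-to-merge)           echo "tier1-snapshot tier2-localstack" ;;
    post-merge|critical-path) echo "tier1-snapshot tier2-localstack tier3-aws-sandbox" ;;
    *) echo "unknown event: $1" >&2; return 1 ;;
  esac
}
```

<p>A workflow step can then iterate over <code>tiers_for_event "$EVENT"</code> and invoke the matching test target, keeping the tier policy in one place.</p>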
<hr />
<h2 id="heading-3-faq-for-cxo-amp-architecture-review">3) FAQ (for CxO &amp; Architecture Review)</h2>
<p><strong>Q1. Why hybrid instead of AWS-only or emulator-only?</strong><br /><strong>A.</strong> Hybrid gives <strong>fast feedback</strong> early and <strong>real parity</strong> late. The final sandbox protects against emulator gaps without paying the full cost and latency of AWS for every inner loop.</p>
<p><strong>Q2. Where does LocalStack fit—and where not?</strong><br /><strong>A.</strong> Use LocalStack for <strong>service-level integration tests</strong> (e.g., object CRUD, table ops, API invocation) where API semantics are stable. Keep <strong>organization/identity/cross-account</strong> and <strong>observability realism</strong> in the AWS sandbox.</p>
<p><strong>Q3. How do we keep risk managed?</strong><br /><strong>A.</strong> CI gates <strong>block promotion</strong> until snapshot + LocalStack pass; AWS sandbox runs with <strong>approvals</strong>, <strong>audit</strong> and <strong>cleanup</strong>. Evidence (logs, traces, diffs) is linked to the release.</p>
<p><strong>Q4. Will this work with CDK and Terraform?</strong><br /><strong>A.</strong> Yes—snapshot tests (template assertions/plan inspection) + functional tests (SDK/CLI) + final AWS checks (deploy &amp; verify) are supported for both stacks.</p>
<p><strong>Q5. How does this support AI-assisted development with one HITL?</strong><br /><strong>A.</strong> Agents iterate autonomously in T1/T2; the HITL reviews <strong>one</strong> AWS sandbox change with full evidence instead of multiple partial attempts.</p>
<hr />
<h2 id="heading-4-customer-experience-leadership-view">4) Customer Experience (Leadership view)</h2>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Persona</td><td>Before</td><td>After</td></tr>
</thead>
<tbody>
<tr>
<td><strong>Engineer</strong></td><td>Slow inner loops tied to AWS deploys; noisy failures surface late.</td><td>Most failures found locally; AWS used sparingly for parity and approvals.</td></tr>
<tr>
<td><strong>Architect</strong></td><td>Hard to compare intent vs. deployed reality.</td><td>Tiered evidence (snapshots, functional traces, parity checks) aligns to design intent.</td></tr>
<tr>
<td><strong>HITL</strong></td><td>Multiple approvals per feature with incomplete context.</td><td>Single, higher-quality approval with consolidated evidence package.</td></tr>
<tr>
<td><strong>FinOps</strong></td><td>Testing spend opaque and hard to segment.</td><td>Dev/test AWS usage isolated to sandbox; local tiers outside cloud billing.</td></tr>
</tbody>
</table>
</div><ul>
<li><p>Snapshot diffs (template/plan)</p>
</li>
<li><p>Local functional logs (SDK test outputs)</p>
</li>
<li><p>Sandbox deploy logs + parity checks</p>
</li>
<li><p>Cleanup receipts and change records</p>
</li>
</ul>
<hr />
<h2 id="heading-5-success-metrics-business-first-team-owned">5) Success Metrics (business-first, team-owned)</h2>
<blockquote>
<p>Track trends; do not hardcode targets in policy. Each team publishes a baseline and a quarterly goal.</p>
</blockquote>
<ul>
<li><p><strong>Lead time (infra change → verified)</strong> — median &amp; p90</p>
</li>
<li><p><strong>% Test cycles executed locally</strong> — share of total cycles in T1/T2</p>
</li>
<li><p><strong>Change approval latency</strong> — time from “ready for sandbox” → approved result</p>
</li>
<li><p><strong>Sandbox hygiene</strong> — time-to-cleanup, orphan resource count</p>
</li>
<li><p><strong>Escaped-defect rate</strong> — defects found after sandbox vs. before</p>
</li>
</ul>
<blockquote>
<p>All metrics and evidence are attached to release artifacts and reviewed in Change Advisory or equivalent forum.</p>
</blockquote>
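<p>The median and p90 lead-time figures called for above can be computed with standard tools; this is a minimal sketch using the nearest-rank method over a newline-separated sample of durations, with the input format and function name as assumptions:</p>

```shell
#!/usr/bin/env bash
# Sketch: nearest-rank percentile over newline-separated numeric samples
# (e.g. lead times in minutes), read from stdin.
set -euo pipefail

percentile() {
  local p="$1"   # 50 for the median, 90 for p90
  sort -n | awk -v p="$p" '{ v[NR] = $1 } END {
    if (NR == 0) exit 1
    r = int((p / 100) * NR + 0.9999)   # nearest rank: ceil(p% of N)
    if (r < 1) r = 1
    print v[r]
  }'
}
```

<p>For example, piping the sample <code>12 45 7 30 60</code> through <code>percentile 50</code> and <code>percentile 90</code> yields the median and p90 to publish as the team baseline.</p>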
<hr />
<h2 id="heading-6-technical-architecture-at-a-glance">6) Technical Architecture (at a glance)</h2>
<h3 id="heading-61-tier-2-localstack-tests-cdk-terraform-on-a-developer-machine">6.1 <strong>Tier 2: LocalStack Tests</strong> — <em>CDK + Terraform on a developer machine</em></h3>
<p><strong>Technology:</strong> LocalStack (Docker), AWS SDK clients (e.g., <code>S3Client</code>, <code>DynamoDBClient</code>, <code>LambdaClient</code>), <strong>CDK CLI (cdklocal)</strong>, <strong>Terraform CLI</strong> (configured for LocalStack endpoints)</p>
<pre><code class="lang-mermaid">%%{init: {
  "theme": "base",
  "themeVariables": {
    "background":"#0b1220",
    "primaryColor":"lightgray",
    "primaryTextColor":"#e6f3ff",
    "primaryBorderColor":"#86efac",
    "lineColor":"#a7f3d0",
    "textColor":"#e5e7eb"
  }
}}%%
flowchart LR
  %% ---------- Classes / Styles ----------
  classDef host fill:#0b1220,stroke:#34d399,color:#e6f3ff,stroke-dasharray:4 3,rx:8,ry:8
  classDef tool fill:#0f172a,stroke:#86efac,color:#e6f3ff,stroke-width:1.5px,rx:8,ry:8
  classDef test fill:#0f172a,stroke:#60a5fa,color:#e6f3ff,stroke-width:1.5px,rx:8,ry:8
  %% AWS service palettes (approx. enterprise-friendly)
  classDef svcS3  fill:#14532d,stroke:#22c55e,color:#e6f3ff,rx:8,ry:8
  classDef svcDDB fill:#0b2e53,stroke:#38bdf8,color:#e6f3ff,rx:8,ry:8
  classDef svcLAM fill:#4a1d06,stroke:#f59e0b,color:#fff7ed,rx:8,ry:8
  classDef svcAPIG fill:#3b0a3f,stroke:#e879f9,color:#fdf4ff,rx:8,ry:8
  classDef svcCFN fill:#3f1d2e,stroke:#fb7185,color:#ffe4e6,rx:8,ry:8

  %% ---------- Developer Host ----------
  subgraph DEV["👩‍💻 Developer Machine"]
    direction TB
    CDK["🟩 CDK CLI  (cdklocal)"]:::tool
    TF["🟪 Terraform CLI  (LocalStack endpoints)"]:::tool
    TEST["🧪 Jest / Integration Tests  (AWS SDK clients)"]:::test
  end
  class DEV host

  %% ---------- LocalStack Host ----------
  subgraph LST["🧱 LocalStack Container  (Port 4566)"]
    direction TB
    S3["🪣 S3"]:::svcS3
    DDB["🧊 DynamoDB"]:::svcDDB
    LMB["λ Lambda"]:::svcLAM
    APIG["🛣 API Gateway"]:::svcAPIG
    CFM["🏗 CloudFormation"]:::svcCFN
  end
  class LST host

  %% ---------- Connectivity ----------
  DEV -. Docker network .- LST

  %% Tool/Test → Emulated endpoints
  CDK --&gt;|synth / deploy - emulated| S3
  CDK --&gt; LMB
  CDK --&gt; APIG
  CDK --&gt; CFM

  TF  --&gt;|plan / apply - emulated| S3
  TF  --&gt; DDB
  TF  --&gt; LMB
  TF  --&gt; APIG
  TF  --&gt; CFM

  TEST --&gt;|CRUD / invoke / query| S3
  TEST --&gt; DDB
  TEST --&gt; LMB
  TEST --&gt; APIG
</code></pre>
<blockquote>
<ul>
<li><p><strong>Why:</strong> This keeps most functional checks local—faster feedback and lower cloud usage—while reserving AWS sandbox for production-parity behaviors that emulators don’t cover.</p>
</li>
<li><p><strong>How:</strong> Point CDK (<code>cdklocal</code>) and Terraform providers to the LocalStack endpoint; run integration tests against the emulated services; promote only after Tier-2 tests pass.</p>
</li>
</ul>
</blockquote>
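<p>For the <strong>How</strong> above, this is a minimal sketch of pointing SDK/CLI calls at the LocalStack edge port; the <code>awsl</code> wrapper name and the <code>DRY_RUN</code> switch are illustrative, while <code>aws --endpoint-url</code> and LocalStack's dummy <code>test</code> credentials are standard usage:</p>

```shell
#!/usr/bin/env bash
# Sketch: aim AWS tooling at LocalStack (port 4566) instead of real AWS.
set -euo pipefail

export AWS_ACCESS_KEY_ID=test AWS_SECRET_ACCESS_KEY=test AWS_DEFAULT_REGION=us-east-1
LOCALSTACK_URL="${LOCALSTACK_URL:-http://localhost:4566}"

awsl() {
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "aws --endpoint-url $LOCALSTACK_URL $*"   # print the command instead of calling it
  else
    aws --endpoint-url "$LOCALSTACK_URL" "$@"
  fi
}
```

<p>With the LocalStack container up, <code>awsl s3 mb s3://demo-bucket</code> exercises the emulated S3 API; <code>cdklocal</code> and a Terraform provider <code>endpoints</code> block achieve the same redirection for IaC deploys.</p>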
<hr />
<h3 id="heading-62-cicd-monitoring-checklist-cdk-amp-terraform-at-a-glance">6.2 <strong>CI/CD Monitoring Checklist — CDK &amp; Terraform (at-a-glance)</strong></h3>
<p>A concise, weekly+monthly monitoring loop ensures the pipeline remains efficient and compliant without embedding static thresholds in this 2-pager. The <strong>authoritative targets, tasks, and evidence paths</strong> live in the “CI/CD Monitoring Checklist – CDK Infrastructure” reference.</p>
<pre><code class="lang-mermaid">%%{init:{
  "theme":"base",
  "themeVariables":{
    "background":"#0b1220",
    "primaryColor":"#60a5fa",
    "primaryTextColor":"#e6f3ff",
    "primaryBorderColor":"#93c5fd",
    "lineColor":"#bfdbfe",
    "textColor":"#e5e7eb"
  }
}}%%
flowchart TB
  %% ----------- Classes -----------
  classDef lane     fill:#0e1726,stroke:#93c5fd,stroke-width:2px,color:#e6f3ff,rx:8,ry:8
  classDef item     fill:#0b1220,stroke:#60a5fa,stroke-width:1.4px,color:#e6f3ff,rx:8,ry:8
  classDef util     fill:#0b1220,stroke:#22c55e,stroke-width:1.4px,color:#e6f3ff,rx:8,ry:8
  classDef qual     fill:#0b1220,stroke:#38bdf8,stroke-width:1.4px,color:#e6f3ff,rx:8,ry:8
  classDef rel      fill:#0b1220,stroke:#f59e0b,stroke-width:1.4px,color:#fff7ed,rx:8,ry:8
  classDef gov      fill:#0b1220,stroke:#fb7185,stroke-width:1.4px,color:#ffe4e6,rx:8,ry:8
  classDef legend   fill:#0f172a,stroke:#93c5fd,stroke-dasharray:4 3,color:#e6f3ff,rx:8,ry:8

  %% ========== LAYER 1: WEEKLY (four stacked lanes) ==========
  subgraph W["🗓 Weekly Review — CDK &amp; Terraform"]
    direction LR

    %% Utilization &amp; Spend
    subgraph WUTIL["Utilization &amp; Spend"]
      direction TB
      U1["💸 Actions usage &amp; cost footprint"]:::util
    end
    class WUTIL lane

    %% Quality &amp; Coverage
    subgraph WQUAL["Quality &amp; Coverage"]
      direction TB
      U2["⏱ Test durations by tier (T1 / T2 / T3)"]:::qual
      U3["✅ Pass rates &amp; flaky analysis"]:::qual
      U4["🧭 Coverage trends &amp; diffs"]:::qual
    end
    class WQUAL lane

    %% Reliability &amp; Throughput
    subgraph WREL["Reliability &amp; Throughput"]
      direction TB
      U5["🧯 Workflow failures — root causes"]:::rel
      U6["🗂 Artifacts &amp; retention checks"]:::rel
    end
    class WREL lane

    %% Governance &amp; Policy
    subgraph WGOV["Governance &amp; Policy"]
      direction TB
      U7["🛡 Quality gates / constitutional checks"]:::gov
    end
    class WGOV lane
  end

  %% Handoff arrow (kept minimal and explicit)
  H[/"Roll-ups → insights → decisions"/]:::item

  %% ========== LAYER 2: MONTHLY (single stacked lane) ==========
  subgraph M["📅 Monthly Review — Executive Summary"]
    direction TB
    M1["📈 Cost &amp; trend summary"]:::item
    M2["🧱 Stability &amp; optimization opportunities"]:::item
    M3["⚖️ Policy / threshold revalidation"]:::item
  end
  class M lane

  %% Flow (Weekly lanes converge to handoff → Monthly)
  WUTIL --&gt; H
  WQUAL --&gt; H
  WREL  --&gt; H
  WGOV  --&gt; H
  H --&gt; M

  %% ========== Legend (compact, pinned to the right) ==========
  subgraph LEG["Legend"]
    direction TB
    L1["🟩 Utilization"]:::util
    L2["🟦 Quality"]:::qual
    L3["🟧 Reliability"]:::rel
    L4["🟥 Governance"]:::gov
  end
  class LEG legend

  %% Position legend visually to the right (soft hint via invisible connectors)
  M -. reference .- LEG
</code></pre>
<ul>
<li><p><strong>Why:</strong> Keeps leadership and teams aligned on throughput, stability, and cost—without coupling the 2-pager to specific numeric targets.</p>
</li>
<li><p><strong>How:</strong> Follow the referenced checklist for the exact checks, thresholds, evidence logging format, artifact retention, escalation paths, and review cadence. Store logs and summaries exactly where specified in the checklist doc.</p>
</li>
</ul>
<hr />
<h3 id="heading-63-cicd-gate-flow-tool-agnostic">6.3 CI/CD Gate Flow (tool-agnostic)</h3>
<pre><code class="lang-mermaid">%%{init: {"theme":"base","themeVariables":{"background":"#0b1220","primaryColor":"#22c55e","lineColor":"#86efac","textColor":"#e5e7eb"}}}%%
sequenceDiagram
  autonumber
  participant Dev as Dev/Agent
  participant CI as CI Pipeline
  participant LCL as LocalStack
  participant AWS as AWS Sandbox
  Dev-&gt;&gt;CI: Push change / open PR
  CI-&gt;&gt;CI: Tier 1 Snapshot (assert/plan)
  CI--&gt;&gt;Dev: Fail? → fix &amp; retry
  CI-&gt;&gt;LCL: Tier 2 Functional tests (SDK/CLI)
  CI--&gt;&gt;Dev: Fail? → fix &amp; retry
  CI-&gt;&gt;AWS: Tier 3 Sandbox deploy &amp; parity checks (with approvals)
  AWS--&gt;&gt;CI: Evidence (logs, diffs, cleanup)
  CI--&gt;&gt;Dev: Gate “Ready to Merge”
</code></pre>
<blockquote>
<p>Patterns and responsibilities are documented for snapshot assertions, LocalStack orchestration, sandbox deploy/cleanup, and evidence capture.</p>
</blockquote>
<hr />
<h2 id="heading-7-risks-amp-mitigations-executive-view">7) Risks &amp; Mitigations (executive view)</h2>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Risk</td><td>Why it matters</td><td>Mitigation</td></tr>
</thead>
<tbody>
<tr>
<td><strong>Emulator gaps vs. AWS behavior</strong></td><td>False green in T2 leads to late discovery.</td><td><strong>Mandatory T3</strong> parity checks; keep a living list of unsupported features; add focused tests in sandbox.</td></tr>
<tr>
<td><strong>Sandbox sprawl/cost</strong></td><td>Orphaned resources and noisy accounts.</td><td>Automated teardown on CI completion; lifecycle rules; budget alerts; periodic hygiene jobs.</td></tr>
<tr>
<td><strong>Approval bottlenecks</strong></td><td>HITL delay negates inner-loop gains.</td><td>Consolidate evidence; one approval per change; rotate approvers; pre-approved patterns for low-risk changes.</td></tr>
<tr>
<td><strong>Signal quality</strong></td><td>Incomplete evidence weakens decisions.</td><td>Standard evidence kit (snapshots, logs, parity diffs, cleanups) attached to every PR.</td></tr>
</tbody>
</table>
</div><hr />
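<p>The “standard evidence kit” mitigation can be sketched as a simple completeness check that blocks the gate until every artifact is attached (artifact names follow the table; the attachment data is illustrative):</p>

```python
# Gate-readiness check for the "standard evidence kit": a PR is only
# ready when every required artifact is attached. Artifact names follow
# the risks table; the attached list is illustrative.

REQUIRED = {"snapshots", "logs", "parity_diffs", "cleanup_report"}

def missing_evidence(attached):
    """Return the required artifacts a PR is still missing, sorted."""
    return sorted(REQUIRED - set(attached))

print(missing_evidence(["snapshots", "logs"]))
# -> ['cleanup_report', 'parity_diffs']
```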
<h2 id="heading-appendix-cxo-friendly-quick-reference">Appendix (CXO-friendly quick reference)</h2>
<h3 id="heading-what-leaders-approve">🎯 What leaders approve</h3>
<ul>
<li><p>The <strong>process</strong> (three tiers + gates + evidence), not a single tool.</p>
</li>
<li><p>The <strong>guardrails</strong> (no direct prod, sandbox only with cleanup &amp; audit).</p>
</li>
</ul>
<h3 id="heading-what-teams-do-next">✅ What teams do next</h3>
<ul>
<li><p>Add <strong>snapshot tests</strong>, <strong>LocalStack functional tests</strong>, and a <strong>sandbox parity job</strong> to CI.</p>
</li>
<li><p>Publish <strong>baseline metrics</strong> and review monthly in the same forum as changes.</p>
</li>
</ul>
<h3 id="heading-references"><strong>References:</strong></h3>
<ul>
<li><p><a target="_blank" href="https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/test-aws-infra-localstack-terraform.html">https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/test-aws-infra-localstack-terraform.html</a></p>
</li>
<li><p><a target="_blank" href="https://aws.amazon.com/blogs/aws/accelerate-serverless-testing-with-localstack-integration-in-vs-code-ide/">https://aws.amazon.com/blogs/aws/accelerate-serverless-testing-with-localstack-integration-in-vs-code-ide/</a></p>
</li>
<li><p><a target="_blank" href="https://blog.localstack.cloud/aws-toolkit-vscode-localstack/">https://blog.localstack.cloud/aws-toolkit-vscode-localstack/</a></p>
</li>
</ul>
<hr />
<p><strong>Prepared for:</strong> VP Engineering, Director of Platform Engineering, Principal Architects<br /><strong>Document type:</strong> Internal 2-pager (Working Backwards format) • <strong>Source:</strong> <a target="_blank" href="http://local-testing.md">local-testing.md</a> (team guidance &amp; patterns)</p>
]]></content:encoded></item><item><title><![CDATA[AI-Powered AWS Network Architecture Discovery Automation & Cost Optimization]]></title><description><![CDATA[🏆 Project Highlights

Delivered on time with speed and efficiency, with a proven 20-35% cost reduction and a clear 12-day phased rollout implementation path. ROI timeline: 2-3 month payback period

Created reusable framework applicable to all AWS Multi-A...]]></description><link>https://blog.oceansoft.io/aws-network-discovery</link><guid isPermaLink="true">https://blog.oceansoft.io/aws-network-discovery</guid><category><![CDATA[AI]]></category><category><![CDATA[ai agents]]></category><category><![CDATA[AWS]]></category><category><![CDATA[networking]]></category><category><![CDATA[mcp]]></category><category><![CDATA[SDLC]]></category><dc:creator><![CDATA[Thanh Nguyen]]></dc:creator><pubDate>Fri, 31 Oct 2025 11:00:19 GMT</pubDate><content:encoded><![CDATA[<h2 id="heading-project-highlights">🏆 Project Highlights</h2>
<ol>
<li><p><strong>Delivered</strong> on time with <strong>speed and efficiency</strong>, with a proven <mark>20-35% cost reduction</mark> and a clear <em>12-day</em> phased rollout implementation path. <strong>ROI Timeline</strong>: 2-3 month payback period</p>
</li>
<li><p><strong>Created reusable framework</strong> applicable to any AWS Multi-Account Landing Zone, plus a <strong>Complete Test Data Framework</strong> for validation and development.</p>
</li>
<li><p><strong>Integrated cutting-edge technologies</strong>:</p>
<ul>
<li><p><strong>AI Agents</strong> with <strong>7-Track Parallel Discovery Pattern</strong> achieving <strong>8x</strong> velocity improvement</p>
</li>
<li><p><strong>MCP Servers</strong> for AWS discovery, cost, and audit analysis</p>
</li>
<li><p><strong>CloudOps/FinOps <mark>Runbooks</mark></strong> for automated discovery with system-level validation</p>
</li>
</ul>
</li>
</ol>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text"><strong><em>Enterprise Features</em></strong>: Multi-Account LZ Analysis + Compliance Validation &amp; Audit Trail</div>
</div>

<h2 id="heading-core-components-to-integrate">Core Components to Integrate</h2>
<ol>
<li><p><strong>HITL &amp; <mark>Agent Orchestration Framework</mark></strong> with role-based task assignment &amp; QA gate approvals</p>
<ul>
<li><code>product-owner</code>: Business-Strategy Lead - ROI, stakeholder management</li>
</ul>
</li>
</ol>
<ul>
<li><p><code>cloud-architect</code>: Technical-Excellence Lead - architecture, implementation</p>
</li>
<li><p><code>sre-automation-specialist</code>: Cost optimization, performance, reliability</p>
</li>
<li><p><code>devops-security-engineer</code>: Security posture, compliance</p>
</li>
<li><p><code>qa-testing-specialist</code>: Validation, quality assurance</p>
</li>
<li><p><code>python-engineer</code>: Custom scripts, automation</p>
</li>
<li><p><code>technical-documentation-engineer</code>: Reports, documentation</p>
</li>
</ul>
<ol start="2">
<li><p><a target="_blank" href="https://awslabs.github.io/mcp/"><strong>15+ AWS MCP Servers</strong></a> with proven business metrics and ROI calculations</p>
<ul>
<li><code>awslabs.core-mcp</code> (VPC/EC2 discovery)</li>
</ul>
</li>
</ol>
<ul>
<li><p><code>awslabs.cost-explorer</code> (cost analysis)</p>
</li>
<li><p><code>awslabs.cloudwatch</code> (metrics)</p>
</li>
<li><p><code>awslabs.aws-diagram</code> (visualization)</p>
</li>
<li><p><code>awslabs.iam</code> (permissions analysis)</p>
</li>
<li><p><code>awslabs.cloudtrail</code> (audit)</p>
</li>
<li><p><code>awslabs.terraform-mcp</code> (IaC state)</p>
</li>
</ul>
<ol start="3">
<li><p><strong>Built-in AI-Tools &amp; Network Analysis Tools</strong>: Tool-specific commands for each discovery phase</p>
<ul>
<li><code>tcpdump</code>: Packet capture &amp; analysis</li>
</ul>
</li>
</ol>
<ul>
<li><p><code>traceroute</code>: Path analysis</p>
</li>
<li><p><code>nslookup/dig</code>: DNS resolution</p>
</li>
<li><p><code>telnet</code>: Port connectivity</p>
</li>
<li><p><code>ping</code>: Basic reachability</p>
</li>
<li><p><code>netstat</code>: Connection analysis</p>
</li>
<li><p><code>ss</code>: Socket statistics</p>
</li>
</ul>
<h2 id="heading-3-mode-testing-amp-3-way-validation-each-phase">3-Mode Testing &amp; 3-Way Validation per Phase</h2>
<h3 id="heading-3-mode-testing">3-Mode Testing</h3>
<ol>
<li><p><strong>Mode 1 - MCP Direct</strong>:</p>
<ul>
<li><p>Execute directly via MCP servers</p>
</li>
<li><p>Real-time AWS API calls</p>
</li>
<li><p>JSON/structured output</p>
</li>
</ul>
</li>
<li><p><strong>Mode 2 - Jupyter-Notebook Workflows with</strong> <code>Papermill</code>:</p>
<ul>
<li><p>Pre-built analysis notebooks</p>
</li>
<li><p>Data visualization dashboard templates</p>
</li>
<li><p>Cost optimization dashboards</p>
</li>
<li><p>Security assessment reports</p>
</li>
</ul>
</li>
<li><p><strong>Mode 3 - Native Tools</strong>:</p>
<ul>
<li><p>Native AWS CLI/API calls/commands</p>
</li>
<li><p>Network diagnostic tools</p>
</li>
<li><p>Runbooks for automated discovery with system-level validation</p>
</li>
</ul>
</li>
</ol>
<h3 id="heading-3-way-validation">3-Way Validation</h3>
<ol>
<li><p><strong>Forward</strong>: AI Agents → MCP → AWS</p>
</li>
<li><p><strong>Backward</strong>: AWS → MCP → AI Agents</p>
</li>
<li><p><strong>CrossCheck</strong>: Direct AWS CLI/API validation</p>
</li>
</ol>
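<p>The 3-way validation can be sketched as a field-by-field agreement check across the three discovery paths. A minimal, illustrative example (the record shapes and sample data are hypothetical; the ≥99.5% threshold matches the accuracy target stated in the deliverables):</p>

```python
# The same inventory is collected via the forward (agents->MCP->AWS),
# backward (AWS->MCP->agents), and cross-check (direct CLI/API) paths,
# then compared key by key. Sample data below is illustrative.

def agreement(forward, backward, crosscheck):
    """Fraction of keys on which all three discovery paths agree."""
    keys = set(forward) | set(backward) | set(crosscheck)
    hits = sum(
        1 for k in keys
        if forward.get(k) == backward.get(k) == crosscheck.get(k)
    )
    return hits / len(keys) if keys else 1.0

fwd = {"vpc-1": "10.0.0.0/16", "vpc-2": "10.1.0.0/16"}
bwd = {"vpc-1": "10.0.0.0/16", "vpc-2": "10.1.0.0/16"}
cli = {"vpc-1": "10.0.0.0/16", "vpc-2": "10.1.0.0/16"}
score = agreement(fwd, bwd, cli)
print(f"{score:.1%}", score >= 0.995)  # 100.0% True
```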
<details><summary>AWS Configuration</summary><div data-type="detailsContent">AWS_PROFILE &amp; AWS_REGION + Centralised-Networking-Account</div></details>

<h3 id="heading-end-to-end-agents-sdlc-amp-deliverables">End-to-End Agents SDLC &amp; Deliverables</h3>
<ol>
<li><p>Copy-paste-ready executive prompt <code>AWS-Network-Discovery.md</code>, with agent orchestration under the <code>product-owner</code> + <code>cloud-architect</code> dual-leadership model</p>
</li>
<li><p>MCP integration for all AWS services + Network tools command library + <mark>Runbooks</mark>: Cost reduction projections and Security improvement metrics</p>
</li>
<li><p>Jupyter notebook templates with the 3-mode/3-way validation framework, plus business metrics and ROI calculations: cross-validation matrices with accuracy ≥99.5%</p>
</li>
</ol>
<h2 id="heading-multi-account-network-architecture">Multi-Account Network Architecture</h2>
<h3 id="heading-centralised-networking-account">Centralised Networking Account</h3>
<h3 id="heading-application-account">Application Account</h3>
<h2 id="heading-actionable-cost-optimization">Actionable Cost Optimization</h2>
]]></content:encoded></item><item><title><![CDATA[Agile SDLC Workflow for HITL + AI Agents]]></title><description><![CDATA[🎯 This outlines how to run an Agile SDLC with your enterprise team (1 HITL + AI Agents) to build and publish the CloudOps & FinOps runbooks automation system iteratively, safely, and with transparent governance 🧰

https://youtu.be/B75p1-x_DdI
 
1. Team Compo...]]></description><link>https://blog.oceansoft.io/agile-sdlc-workflow-for-hitl-ai-agents</link><guid isPermaLink="true">https://blog.oceansoft.io/agile-sdlc-workflow-for-hitl-ai-agents</guid><category><![CDATA[AI]]></category><category><![CDATA[ai agents]]></category><category><![CDATA[agile]]></category><category><![CDATA[AWS]]></category><category><![CDATA[runbooks]]></category><dc:creator><![CDATA[Thanh Nguyen]]></dc:creator><pubDate>Fri, 10 Oct 2025 02:39:53 GMT</pubDate><content:encoded><![CDATA[<blockquote>
<p>🎯 This outlines how to run an <strong>Agile SDLC</strong> with your <strong>enterprise team</strong> (1 HITL + AI Agents) to build and publish the <strong>CloudOps &amp; FinOps runbooks</strong> automation system iteratively, safely, and with transparent governance 🧰</p>
</blockquote>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/B75p1-x_DdI">https://youtu.be/B75p1-x_DdI</a></div>
<p> </p>
<h2 id="heading-1-team-composition-amp-workflow-setup">1. Team Composition &amp; Workflow Setup</h2>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Role</strong></td><td><strong>Work Focus</strong></td><td><strong>Interaction with Spec / Tasks</strong></td></tr>
</thead>
<tbody>
<tr>
<td>👨‍✈️ <strong>HITL Manager</strong></td><td>Strategic direction, priorities, stakeholder alignment</td><td>Prioritize spec backlog, approve exceptions</td></tr>
<tr>
<td>🤖:AI <strong>Product-Owner</strong></td><td>Value definition, backlog grooming</td><td>Draft spec proposals, manage spec backlog</td></tr>
<tr>
<td>🤖:AI <strong>Cloud-Architect</strong></td><td>High-level design, cross-module alignment</td><td>Produce plan layer, approve architectural spec</td></tr>
<tr>
<td>🤖:AI <strong>DevSec Engineer</strong></td><td>Policy-as-code, security review, risk scoring</td><td>Annotate spec risk, enforce control gates</td></tr>
<tr>
<td>🤖:AI <strong>SRE Automation</strong></td><td>Reliability, drift logic, safe automation</td><td>Validate detection / remediation specs</td></tr>
<tr>
<td>🤖:AI <strong>Python Engineer</strong></td><td>Code implementation, adapters</td><td>Generate module code &amp; tests</td></tr>
<tr>
<td>🤖:AI <strong>QA Specialist</strong></td><td>Test coverage, regression, negative tests</td><td>Write test spec and ensure validation</td></tr>
<tr>
<td>🤖:AI <strong>Data Architect</strong></td><td>Metrics, telemetry, cost modeling</td><td>Define data contracts and instrumentation</td></tr>
<tr>
<td>🤖:AI <strong>Document Engineer</strong></td><td>ADRs, runbooks, PR/FAQ, spec docs</td><td>Produce narratives tied to spec docs</td></tr>
</tbody>
</table>
</div><p>All agents use a <strong>shared workspace</strong> (Git + Spec Kit + JIRA) and follow a <strong>sprint cadence</strong>.</p>
<hr />
<h2 id="heading-2-spec-driven-workflow">2. Spec-Driven Workflow</h2>
<p>We adopt <strong>Spec-Driven Development</strong> with a 3-phase workflow (<strong>Specify</strong> → <strong>Plan</strong> → <strong>Tasks</strong>) using <a target="_blank" href="https://github.com/github/spec-kit">GitHub Spec Kit</a> / <a target="_blank" href="https://github.com/bmad-code-org/BMAD-METHOD">BMAD method</a> 🆓 / <a target="_blank" href="https://kiro.dev/">AWS Kiro</a> 💰.</p>
<ol>
<li><p><strong>Specify</strong>: PO (or HITL) writes the “what / why / acceptance criteria / risk metadata” spec</p>
</li>
<li><p><strong>Plan</strong>: Architect designs architecture, module boundaries, interfaces</p>
</li>
<li><p><strong>Tasks</strong>: Break into small, testable units (covered by AI agents)</p>
</li>
</ol>
<p><strong>Gates</strong> between phases must pass reviews: spec review, architecture review, test planning.</p>
<p>This ensures alignment, reduces rework, and makes assumptions explicit.</p>
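<p>A minimal sketch of the phase-gate logic (phase and gate names come from the workflow above; the review outcomes are hypothetical inputs):</p>

```python
# Phase-gate progression for Specify -> Plan -> Tasks. The gate out of
# "specify" is the spec review, out of "plan" the architecture review,
# and out of "tasks" the test-planning review.

PHASES = ["specify", "plan", "tasks"]

def next_phase(current, gate_passed):
    """Advance only when the current phase's gate review passed."""
    i = PHASES.index(current)
    if not gate_passed:
        return current                       # rework within the same phase
    return PHASES[i + 1] if i + 1 < len(PHASES) else "done"

print(next_phase("specify", True))   # -> plan
print(next_phase("plan", False))     # -> plan (architecture review failed)
```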
<h2 id="heading-3-architecture-amp-patterns-for-cloud-foundation-cost-optimization">3. Architecture &amp; Patterns for Cloud Foundation + Cost Optimization</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1759981217251/ad95b49c-6225-428c-b746-79419604bb90.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-mcp-integration-summary">🔄 MCP Integration Summary</h3>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Automation</strong></td><td><strong>Connection</strong></td></tr>
</thead>
<tbody>
<tr>
<td>Agent → <a target="_blank" href="https://github.com/github/spec-kit"><strong>Spec-Kit</strong></a></td><td>AI Agents consume spec files, generate implementation</td></tr>
<tr>
<td>Remediation → <a target="_blank" href="https://pypi.org/project/runbooks/">Runbooks API</a></td><td>All snapshots and actions logged</td></tr>
<tr>
<td><a target="_blank" href="https://awslabs.github.io/mcp/"><strong>AWS MCP Servers</strong></a></td><td>🏗️ Infrastructure 💰 Cost &amp; Operations Monitor: optimize &amp; manage AWS infra and costs</td></tr>
<tr>
<td><a target="_blank" href="https://www.atlassian.com/platform/remote-mcp-server">Atlassian Jira / Confluence</a></td><td>Tickets &amp; exceptions</td></tr>
<tr>
<td>Metrics → <a target="_blank" href="https://vizro.readthedocs.io/projects/vizro-mcp">Vizro Analytics</a></td><td>Compliance trends, drift latency</td></tr>
<tr>
<td>Slack / Teams</td><td>Notifications / alerts</td></tr>
</tbody>
</table>
</div><hr />
<h3 id="heading-okrs-metrics-amp-continuous-improvement">OKRs, Metrics &amp; Continuous Improvement</h3>
<p>Each sprint contributes to OKRs (quarterly). Example OKRs for Runbooks:</p>
<ul>
<li><p><strong>KR1</strong>: Increase inventory coverage to 95% of accounts</p>
</li>
<li><p><strong>KR2</strong>: Drift remediation latency P95 ≤ 20 minutes</p>
</li>
<li><p><strong>KR3</strong>: Cost savings via rightsizing ≥ 15%</p>
</li>
<li><p><strong>KR4</strong>: False positives &lt; 5%</p>
</li>
</ul>
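<p>These KRs can be evaluated mechanically from sprint telemetry. A minimal sketch (the thresholds are the ones stated above; the sample numbers are illustrative):</p>

```python
# Evaluate the example KRs from per-sprint telemetry. Thresholds are the
# ones stated in the OKRs above; the sample numbers are made up.

def p95(values):
    """Nearest-rank 95th percentile."""
    s = sorted(values)
    return s[max(0, int(round(0.95 * len(s))) - 1)]

def evaluate_krs(coverage, latencies_min, savings_pct, fp_rate):
    return {
        "KR1 coverage >= 95%": coverage >= 0.95,
        "KR2 drift P95 <= 20 min": p95(latencies_min) <= 20,
        "KR3 savings >= 15%": savings_pct >= 15,
        "KR4 false positives < 5%": fp_rate < 0.05,
    }

status = evaluate_krs(
    coverage=0.96,
    latencies_min=[5, 8, 12, 14, 19, 25],  # minutes per drift remediation
    savings_pct=17.2,
    fp_rate=0.03,
)
print(status)  # KR2 fails here: the sample's P95 latency is 25 min
```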
<p>Measure velocity, change failure rate, lead time, DORA metrics, module reuse, test coverage.</p>
<hr />
<h2 id="heading-4-publishing-amp-feedback-loop">4. Publishing &amp; Feedback Loop</h2>
<ul>
<li><p>After stable increment, package new modules, update CLI, publish to PyPI</p>
</li>
<li><p>Version bump, release notes, changelog</p>
</li>
<li><p>External teams adopt and provide feedback (bugs, feature requests)</p>
</li>
<li><p>Those feedback items become new spec proposals</p>
</li>
</ul>
<hr />
<h2 id="heading-5-references">5. References</h2>
<ul>
<li><p>📚 <a target="_blank" href="https://www.amazon.com.au/Rewired-McKinsey-Guide-Outcompeting-Digital/dp/1394207115">Rewired: The McKinsey Guide to Outcompeting in the Age of Digital and AI</a></p>
</li>
<li><p>📚 <a target="_blank" href="https://www.amazon.com.au/AI-Engineering-Building-Applications-Foundation/dp/1098166302">AI Engineering: Building Applications with Foundation Models</a></p>
</li>
<li><p>📚 <a target="_blank" href="https://www.amazon.com.au/AWS-Solutions-Architects-definitive-Architecture/dp/1836641931">AWS for Solutions Architects: Design and scale secure AWS architectures with GenAI strategies and real-world patterns</a></p>
</li>
</ul>
<hr />
]]></content:encoded></item><item><title><![CDATA[Expose Kubernetes Microservices hosted on Private Subnets and On-Premises Networks]]></title><description><![CDATA[💡
This article provides an overview of the AWS Architecture Diagram to deploy and manage a hybrid cloud infrastructure, consisting of microservices and applications, in an automated manner, utilising Amazon EKS and HashiCorp Terraform. 🎯🌤️


Intro...]]></description><link>https://blog.oceansoft.io/expose-kubernetes-microservices-hosted-on-private-subnets-and-on-premises-networks</link><guid isPermaLink="true">https://blog.oceansoft.io/expose-kubernetes-microservices-hosted-on-private-subnets-and-on-premises-networks</guid><category><![CDATA[Microservices]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[EKS]]></category><category><![CDATA[AWS]]></category><category><![CDATA[Cloud]]></category><dc:creator><![CDATA[Thanh Nguyen]]></dc:creator><pubDate>Sat, 08 Jul 2023 20:49:33 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1688708972526/8092f9dc-ca58-4466-ae2a-36d2bb4d371c.gif" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">This article provides an overview of the <strong><mark>AWS Architecture Diagram</mark></strong> to deploy and manage a hybrid cloud infrastructure, consisting of microservices and applications, in an automated manner, utilising <strong>Amazon EKS </strong>and <strong>HashiCorp Terraform</strong>. 🎯🌤️</div>
</div>

<h2 id="heading-introduction">Introduction</h2>
<h3 id="heading-microservices">Microservices</h3>
<p><strong>Microservices</strong> enable applications to scale easily and to be developed quickly, enabling innovation and speeding up time to market. The two main technologies that enable the development of microservices applications are Containers and Serverless. In recent years, <strong>Kubernetes</strong> (<strong>K8s</strong>) has been widely adopted for automating the deployment of infrastructure, scaling of resources, and managing containerized applications.</p>
<p>When developing microservices, we should follow these best practices <a target="_blank" href="https://www.linkedin.com/posts/alexxubyte_systemdesign-coding-interviewtips-activity-7082021558193418241-nw7Q">[1]</a>:</p>
<ul>
<li><p>Use separate data storage for each microservice</p>
</li>
<li><p>Keep code at a similar level of maturity</p>
</li>
<li><p>Separate build for each microservice</p>
</li>
<li><p>Assign each microservice with a single responsibility</p>
</li>
<li><p>Deploy into containers</p>
</li>
<li><p>Design stateless services</p>
</li>
<li><p>Adopt domain-driven design</p>
</li>
<li><p>Design micro frontend</p>
</li>
<li><p>Orchestrate microservices</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1688777761871/8518cb67-0b52-4ea0-85dc-1161961e393f.jpeg" alt class="image--center mx-auto" /></p>
<h3 id="heading-amazon-elastic-kubernetes-service-eks">Amazon Elastic Kubernetes Service (<strong>EKS</strong>)</h3>
<p>Utilize <strong>Amazon EKS</strong> to expose <strong>K8s</strong> <strong>Microservices</strong> hosted on <strong>private subnets</strong> to the Internet and <strong>on-premises</strong> networks.</p>
<h3 id="heading-terraform">Terraform</h3>
<p>Further, <strong>HashiCorp Terraform</strong> provides Infrastructure as Code to deploy and operate containerized workloads quickly and efficiently.</p>
<h2 id="heading-virtual-private-cloud-vpc">Virtual Private Cloud (VPC)</h2>
<ul>
<li><p>A highly available architecture that spans at least 2 Availability Zones.</p>
</li>
<li><p>A VPC configured with public and private subnets, according to AWS best practices, to provide you with your own virtual network on AWS.</p>
</li>
<li><p>Routes incoming internet traffic through an Amazon Route 53 public hosted zone.</p>
</li>
<li><p>In the public subnets, managed Network Address Translation (NAT) gateways allow outbound internet access for resources in the private subnet.</p>
</li>
<li><p>In the private subnets, Amazon EKS clusters with Kubernetes Worker Nodes inside an Auto Scaling Group. Each node is an Amazon Elastic Compute Cloud (<strong>Amazon EC2</strong>) instance or runs on <strong>AWS Fargate</strong>. Each cluster may contain the following:</p>
<ul>
<li><p>Microservices applications and components.</p>
</li>
<li><p>Cert-manager.</p>
</li>
<li><p>An open-source logging and monitoring solution with Grafana and Prometheus.</p>
</li>
<li><p>ExternalDNS, which synchronizes exposed Kubernetes services and ingresses with <strong>Route 53</strong>.</p>
</li>
</ul>
</li>
<li><p>An <strong>Elastic Load Balancer</strong> to distribute traffic across the Kubernetes nodes.</p>
</li>
<li><p>Amazon Simple Storage Service (<strong>Amazon S3</strong>) to store the files.</p>
</li>
<li><p>Amazon Elastic File System (<strong>Amazon EFS</strong>) to provide storage for Grafana and Prometheus.</p>
</li>
<li><p>Amazon Relational Database Service (<strong>Amazon RDS</strong>) for PostgreSQL to store application data.</p>
</li>
<li><p>Amazon Elastic Container Registry (<strong>Amazon ECR</strong>) to provide a private registry.</p>
</li>
<li><p>AWS Key Management Service (<strong>AWS KMS</strong>) to provide an encryption key for Amazon RDS, Amazon EFS, and <strong>AWS Secrets Manager</strong>.</p>
</li>
<li><p><strong>AWS Secrets Manager</strong> to replace hardcoded credentials, including passwords, with an API call.</p>
</li>
<li><p>🚦 Traffic is sent to and received from the on-premises network over the <strong>Virtual Private Network</strong> (VPN) or <strong>AWS Direct Connect</strong> connection.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1688708107979/1786ea2b-7895-48b0-bdec-74b9fb8ad709.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-expose-microservices-in-a-hybrid-scenario">Expose Microservices in a Hybrid Scenario</h2>
<h3 id="heading-1-inbound-external-andamp-eks-public-load-balancer"><strong>1. Inbound External &amp; EKS Public Load Balancer</strong></h3>
<ul>
<li><p><strong>Amazon Route 53</strong> resolves incoming requests to the <strong><em>public</em></strong> <strong>Elastic Load Balancer (ELB</strong>) deployed by the <a target="_blank" href="https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.3/guide/service/annotations/"><strong>AWS Load Balancer Controller</strong></a><strong>.</strong></p>
</li>
<li><p>🚦The <em>AWS LB controller</em> satisfies <a target="_blank" href="https://kubernetes.io/docs/concepts/services-networking/service/">K8s services</a> with Network Load Balancers (<strong>NLB</strong>s) and <a target="_blank" href="https://www.eksworkshop.com/beginner/130_exposing-service/ingress/">Kubernetes ingresses</a> with Application Load Balancers (<strong>ALB</strong>s). You can also manage ingresses by implementing other ingress controllers like the <a target="_blank" href="https://kubernetes.github.io/ingress-nginx/deploy/">NGINX ingress controller</a>.</p>
</li>
<li><p>🚦The <a target="_blank" href="https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.5/"><strong>EKS</strong>-related ELBs</a> forward traffic to applications. You can choose between the two modes <a target="_blank" href="https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.3/how-it-works/">[2]</a>:</p>
<ul>
<li><p><strong>Instance mode</strong>: The traffic is sent to a worker node, then the service redirects traffic to the Pod.</p>
</li>
<li><p><strong>IP mode</strong>: The traffic is directed to the IP of the Pod directly.</p>
</li>
</ul>
</li>
<li><p>🚦If we’re using <strong>AWS Fargate</strong> for <strong>Amazon EKS</strong>, we will not have Worker Nodes but only the pod ENIs in the private subnets. You can only use <strong><em>ELBs with IP mode</em></strong> with <strong>AWS Fargate</strong> pods.</p>
</li>
</ul>
<h3 id="heading-2-inbound-external-andamp-eks-private-load-balancer">2. Inbound External &amp; EKS Private Load Balancer</h3>
<ul>
<li><strong>Amazon Route 53</strong> resolves incoming requests to the <strong><em>Private</em></strong> <strong>ELB</strong> deployed by the <strong>AWS Load Balancer Controller.</strong></li>
</ul>
<h3 id="heading-3-outbound-external"><strong>3. Outbound External</strong></h3>
<ul>
<li><p>When the pod in private subnets initiates an outbound request to the internet, the private route table forwards the traffic to the <strong>NAT Gateway</strong> (NGW).</p>
</li>
<li><p>The public route table forwards the traffic from the NGW to the <strong>Internet Gateway</strong> (IGW).</p>
</li>
</ul>
<h3 id="heading-4-outbound-internal"><strong>4. Outbound Internal</strong></h3>
<ul>
<li><p>The pod in private subnets initiates an outbound request to the on-premises network. The private route table forwards the traffic to the <strong>Virtual Private Gateway</strong> (VGW).</p>
</li>
<li><p>🚦 You can also enable private access for your <strong>Amazon EKS</strong> cluster’s Kubernetes API server endpoint and limit, or completely disable, public access from the internet <a target="_blank" href="https://aws.amazon.com/blogs/containers/de-mystifying-cluster-networking-for-amazon-eks-worker-nodes/">[3]</a>.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1688708949213/6f85d85f-b988-48b9-afaa-c30b760684d5.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p>🚦<a target="_blank" href="https://aws.amazon.com/premiumsupport/knowledge-center/eks-multiple-cidr-ranges/"><strong>Deal with Pod IP Exhaustion</strong></a><strong>:</strong><br />  <mark>Increase the IP addresses available to pods by adding dedicated subnets from the 100.64.0.0/10 and 198.19.0.0/16 ranges.</mark></p>
<ul>
<li><p>By <a target="_blank" href="https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html">adding secondary CIDR blocks to a VPC</a> from the <a target="_blank" href="https://datatracker.ietf.org/doc/html/rfc6598">RFC 6598</a> address space (in the example 100.64.0.0/16), in conjunction with the <a target="_blank" href="https://docs.aws.amazon.com/eks/latest/userguide/cni-custom-network.html">CNI Custom Networking</a> feature, it is possible for pods to no longer consume any <a target="_blank" href="https://datatracker.ietf.org/doc/html/rfc1918">RFC 1918</a> IP addresses in a VPC (in the example, pods are in subnets 100.64.0.0/19 and 100.64.32.0/19). Check out <a target="_blank" href="https://aws.amazon.com/premiumsupport/knowledge-center/eks-multiple-cidr-ranges/">How do I use multiple CIDR ranges with Amazon EKS?</a></p>
</li>
<li><p>💡 Moving to IPv6 also solves pod IP exhaustion, because you no longer need to work around IPv4 address limits.</p>
</li>
</ul>
</li>
<li><p>🚦Check out <a target="_blank" href="https://aws.amazon.com/blogs/containers/eks-vpc-routable-ip-address-conservation/">EKS VPC routable IP address conservation patterns in a hybrid network</a> for Multi-Account settings to leverage the <a target="_blank" href="https://aws.amazon.com/transit-gateway/">AWS Transit Gateway</a> to scale this pattern across an enterprise to include multiple EKS clusters and an on-premises data-center.</p>
</li>
</ul>
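<p>To see why the secondary CIDR pattern relieves pod IP exhaustion, compare the address counts of the ranges above using Python’s <code>ipaddress</code> module (the /24 workload subnet is an illustrative assumption for contrast):</p>

```python
# Address capacity of the CIDR ranges discussed above. The /16 secondary
# block and the two /19 pod subnets come from the article; the /24 is an
# assumed "typical" workload subnet for comparison.
import ipaddress

for cidr in ["100.64.0.0/16", "100.64.0.0/19", "100.64.32.0/19", "10.0.0.0/24"]:
    net = ipaddress.ip_network(cidr)
    print(f"{cidr:>16} -> {net.num_addresses} addresses")
# 100.64.0.0/16 -> 65536; each /19 pod subnet -> 8192; a /24 -> only 256
```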
<h3 id="heading-references">References</h3>
<ul>
<li><p>[1] <a target="_blank" href="https://www.linkedin.com/posts/alexxubyte_systemdesign-coding-interviewtips-activity-7082021558193418241-nw7Q">https://www.linkedin.com/posts/alexxubyte_systemdesign-coding-interviewtips-activity-7082021558193418241-nw7Q</a></p>
</li>
<li><p>[2] <a target="_blank" href="https://www.linkedin.com/posts/ankit-jodhani_10weeksofcloudops-10weeksofcloudops-aws-activity-7077706068096692225-iQyy?utm_source=share&amp;utm_medium=member_desktop">https://www.linkedin.com/posts/ankit-jodhani_10weeksofcloudops-10weeksofcloudops-aws-activity-7077706068096692225-iQyy</a></p>
</li>
<li><p>[3] <a target="_blank" href="https://d1.awsstatic.com/architecture-diagrams/ArchitectureDiagrams/expose-microservices-using-eks-ra.pdf?did=wp_card&amp;trk=wp_card">https://d1.awsstatic.com/architecture-diagrams/ArchitectureDiagrams/expose-microservices-using-eks-ra.pdf</a></p>
</li>
<li><p>[4] <a target="_blank" href="https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.3/how-it-works/">https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.3/how-it-works/</a></p>
</li>
<li><p>[5] <a target="_blank" href="https://aws.amazon.com/blogs/containers/de-mystifying-cluster-networking-for-amazon-eks-worker-nodes/">https://aws.amazon.com/blogs/containers/de-mystifying-cluster-networking-for-amazon-eks-worker-nodes/</a></p>
</li>
<li><p>[6] <a target="_blank" href="https://terraform.job4u.io/en/private-eks.html">https://terraform.job4u.io/en/private-eks.html</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[⚙️ Automated Development Environment in the Cloud ⛅]]></title><description><![CDATA[🎯 Deliverables:

✅ Youtube: https://youtu.be/UvMYMObzXb4
✅ Github: https://github.com/OceanSoftIO/Digital-Commerce

1. Gitpod Cloud Development Environment
Modern engineering teams are embracing cloud technologies and automating whenever possible, i...]]></description><link>https://blog.oceansoft.io/cloud-development-environment</link><guid isPermaLink="true">https://blog.oceansoft.io/cloud-development-environment</guid><category><![CDATA[Cloud]]></category><category><![CDATA[Developer]]></category><category><![CDATA[Gitpod]]></category><category><![CDATA[ecommerce]]></category><category><![CDATA[Open Source]]></category><dc:creator><![CDATA[Thanh Nguyen]]></dc:creator><pubDate>Mon, 07 Nov 2022 12:35:18 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1667822303883/kQB9McMEn.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>🎯 <strong>Deliverables</strong>:</p>
<ul>
<li>✅ YouTube: https://youtu.be/UvMYMObzXb4</li>
<li>✅ GitHub: https://github.com/OceanSoftIO/Digital-Commerce</li>
</ul>
<h2 id="heading-1-gitpod-cloud-development-environment">1. <code>Gitpod</code> Cloud Development Environment</h2>
<p>Modern engineering teams are embracing cloud technologies and automating whenever possible, including infrastructure, CI/CD build pipelines, linting/formatting, and even writing code, to avoid costly errors and focus on product and customer value creation.</p>
<p>We will demonstrate how to use <a target="_blank" href="https://www.gitpod.io/">Gitpod</a> to spin up an automated development environment in seconds so that you are always ready to code. After completing the guided tutorial below, you will be able to build and deploy open-source 🆓 <a target="_blank" href="https://github.com/OceanSoftIO/Digital-Commerce">Shopify-like Digital Commerce</a> 💰 in minutes.</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://www.youtube.com/watch?v=UvMYMObzXb4">https://www.youtube.com/watch?v=UvMYMObzXb4</a></div>
<h2 id="heading-2-cloud-development-environment-in-1-click">2. Cloud Development Environment in 1-Click</h2>
<p>You can quickly set up a complete development environment in your browser with a single click and begin coding immediately.</p>
<ul>
<li>Click the button below to start a new cloud development environment using Gitpod: <strong>https://gitpod.io/#</strong>https://github.com/OceanSoftIO/Digital-Commerce</li>
</ul>
<p><a target="_blank" href="https://gitpod.io/#https://github.com/OceanSoftIO/Digital-Commerce"><img src="https://gitpod.io/button/open-in-gitpod.svg" alt="Open in Gitpod" /></a></p>
<ul>
<li><p>You will need to authorize your GitHub account before you can use Gitpod.</p>
</li>
<li><p>Upon clicking on Gitpod, a workspace will be opened in the browser that contains:</p>
<ul>
<li>VS Code for editing code.</li>
<li>An embedded browser window in which the application is running.</li>
<li>Two terminal sessions: the left terminal can be used for entering commands, and the right terminal runs the eCommerce-Backend testing.</li>
</ul>
</li>
</ul>
<h3 id="heading-21-generate-your-gitpod-config-file">2.1. Generate Your Gitpod Config File:</h3>
<pre><code class="lang-sh">gp init
</code></pre>
<h3 id="heading-22-edit-gitpodyml">2.2. Edit <code>.gitpod.yml</code></h3>
<pre><code class="lang-yml"><span class="hljs-comment">## List the start up tasks. Learn more https://www.gitpod.io/docs/config-start-tasks/</span>
<span class="hljs-attr">tasks:</span>

  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-number">1.1</span><span class="hljs-string">.</span> <span class="hljs-string">backend</span>
    <span class="hljs-comment">## working directory as `/ecommerce/backend`</span>
    <span class="hljs-attr">before:</span> <span class="hljs-string">|
      npm install @medusajs/medusa-cli@latest -g
      cd backend
</span>    <span class="hljs-attr">init:</span> <span class="hljs-string">|
      npm install
</span>    <span class="hljs-attr">command:</span> <span class="hljs-string">|
      npm run dev
      # gp sync-done finished
</span>    <span class="hljs-attr">openMode:</span> <span class="hljs-string">split-left</span>

<span class="hljs-comment">## List the ports to expose. Learn more https://www.gitpod.io/docs/config-ports/</span>
<span class="hljs-attr">ports:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">port:</span> <span class="hljs-number">9000</span>
    <span class="hljs-attr">onOpen:</span> <span class="hljs-string">ignore</span>
</code></pre>
<h3 id="heading-23-test-ecommerce-backend">2.3. Test ecommerce-backend</h3>
<pre><code class="lang-yml"><span class="hljs-comment">## List the start up tasks. Learn more https://www.gitpod.io/docs/config-start-tasks/</span>
<span class="hljs-attr">tasks:</span>

  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-number">1</span><span class="hljs-string">.</span> <span class="hljs-string">backend</span>
    <span class="hljs-comment">## working directory as `/ecommerce/backend`</span>
    <span class="hljs-attr">before:</span> <span class="hljs-string">|
      npm install @medusajs/medusa-cli@latest -g
      cd backend
</span>    <span class="hljs-attr">init:</span> <span class="hljs-string">|
      npm install
</span>    <span class="hljs-attr">command:</span> <span class="hljs-string">|
      npm run dev
</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-number">2</span><span class="hljs-string">.</span> <span class="hljs-string">backend</span> <span class="hljs-string">(test)</span>
    <span class="hljs-comment">## working directory as `/ecommerce/backend`</span>
    <span class="hljs-attr">before:</span> <span class="hljs-string">|
      cd backend
</span>    <span class="hljs-attr">init:</span> <span class="hljs-string">|
      # echo "curl -X GET localhost:9000/store/products | python -m json.tool"
      echo "npm run seed"
</span>    <span class="hljs-attr">command:</span> <span class="hljs-string">|
      echo "curl -X GET localhost:9000/store/products | python -m json.tool"
</span>
<span class="hljs-comment">## List the ports to expose. Learn more https://www.gitpod.io/docs/config-ports/</span>
<span class="hljs-attr">ports:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">port:</span> <span class="hljs-number">9000</span>
    <span class="hljs-attr">onOpen:</span> <span class="hljs-string">ignore</span>
  <span class="hljs-comment"># - port: 8000</span>
  <span class="hljs-comment">#   onOpen: open-browser</span>
  <span class="hljs-comment"># - port: 7000</span>
  <span class="hljs-comment">#   onOpen: open-preview</span>
</code></pre>
<pre><code class="lang-sh"><span class="hljs-built_in">cd</span> backend

<span class="hljs-comment">##</span>
npm run seed

<span class="hljs-comment">##</span>
curl -X GET localhost:9000/store/products | python -m json.tool
</code></pre>
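The <code>curl … | python -m json.tool</code> step above only pretty-prints the JSON response. If you prefer to stay inside the Node toolchain, the same formatting step can be sketched in TypeScript (nothing is assumed about the payload beyond it being valid JSON):

```typescript
// Pretty-print a JSON string, mirroring `python -m json.tool`.
// Works on any valid JSON payload, e.g. the /store/products response.
function prettyJson(raw: string): string {
  return JSON.stringify(JSON.parse(raw), null, 2);
}

console.log(prettyJson('{"products":[{"id":1,"title":"T-Shirt"}]}'));
```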
<h2 id="heading-3-docker-compose-on-gitpod">3. <code>docker compose</code> on Gitpod</h2>
<blockquote>
<p>This <a target="_blank" href="https://www.gitpod.io">Gitpod</a> Docker Compose setup provides you with pre-built, ephemeral development environments in the cloud ⛅</p>
</blockquote>
<h3 id="heading-31-update-project-config-in-backendmedusa-configjshttpsgithubcomoceansoftiodigital-commerceblobmainbackendmedusa-configjs">3.1. Update project config in <a target="_blank" href="https://github.com/OceanSoftIO/Digital-Commerce/blob/main/backend/medusa-config.js">backend/<code>medusa-config.js</code></a>:</h3>
<pre><code class="lang-javascript"><span class="hljs-built_in">module</span>.exports = {
  <span class="hljs-attr">projectConfig</span>: {

    <span class="hljs-comment">// /** Option1 - SQLite (default): Development-like Environment */</span>
    <span class="hljs-comment">// database_database: "./ecommerce.sql",</span>
    <span class="hljs-comment">// database_type: "sqlite",</span>

    <span class="hljs-comment">/** Option2 - PostgresQL: For more production-like environment */</span>
    <span class="hljs-attr">redis_url</span>: REDIS_URL,
    <span class="hljs-attr">database_url</span>: DATABASE_URL, <span class="hljs-comment">//postgres connectionstring</span>
    <span class="hljs-attr">database_type</span>: <span class="hljs-string">"postgres"</span>,

    <span class="hljs-attr">store_cors</span>: STORE_CORS,
    <span class="hljs-attr">admin_cors</span>: ADMIN_CORS,
  },
  plugins,
};
</code></pre>
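The <code>REDIS_URL</code> and <code>DATABASE_URL</code> constants referenced above are typically resolved from environment variables near the top of <code>medusa-config.js</code>. A minimal sketch of that resolution, with the fallback connection strings being illustrative placeholders rather than values from the repo:

```typescript
// Sketch only: resolve connection strings from the environment with local
// fallbacks. The default URLs below are illustrative placeholders.
const DATABASE_URL: string =
  process.env.DATABASE_URL ?? "postgres://localhost/medusa-store";
const REDIS_URL: string = process.env.REDIS_URL ?? "redis://localhost:6379";
```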
<h3 id="heading-32-docker-compose-task">3.2. <code>docker compose</code> task</h3>
<pre><code class="lang-yml"><span class="hljs-comment">## List the start up tasks. Learn more https://www.gitpod.io/docs/config-start-tasks/</span>
<span class="hljs-attr">tasks:</span>

  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-number">2.1</span><span class="hljs-string">.</span> <span class="hljs-string">backend</span> <span class="hljs-string">&gt;&gt;</span> <span class="hljs-string">docker</span> <span class="hljs-string">compose</span>
    <span class="hljs-comment">## working directory as `/ecommerce/backend`</span>
    <span class="hljs-attr">before:</span> <span class="hljs-string">|
      npm install @medusajs/medusa-cli@latest -g
      cd backend
</span>    <span class="hljs-comment"># init: |</span>
    <span class="hljs-comment">#   docker compose pull</span>
    <span class="hljs-attr">command:</span> <span class="hljs-string">|
      docker compose up --build
      # gp sync-done finished
</span>    <span class="hljs-attr">openMode:</span> <span class="hljs-string">split-left</span>

  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-number">2.2</span><span class="hljs-string">.</span> <span class="hljs-string">backend</span> <span class="hljs-string">&gt;&gt;</span> <span class="hljs-string">docker</span> <span class="hljs-string">compose</span> <span class="hljs-string">(test)</span>
    <span class="hljs-comment">## working directory as `/ecommerce/backend`</span>
    <span class="hljs-attr">before:</span> <span class="hljs-string">|
      cd backend
</span>    <span class="hljs-attr">init:</span> <span class="hljs-string">|
      # curl -X GET localhost:9000/store/products | python -m json.tool
      echo "docker image ls"
      echo "docker container ls"
      echo "docker exec ecommerce-backend medusa seed -f ./data/seed.json"
</span>    <span class="hljs-attr">command:</span> <span class="hljs-string">|
      # gp sync-await finished &amp;&amp; \
      echo "curl -X GET localhost:9000/store/products | python -m json.tool"
</span>    <span class="hljs-attr">openMode:</span> <span class="hljs-string">split-right</span>

<span class="hljs-comment">## List the ports to expose. Learn more https://www.gitpod.io/docs/config-ports/</span>
<span class="hljs-attr">ports:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">port:</span> <span class="hljs-number">9000</span>
    <span class="hljs-attr">onOpen:</span> <span class="hljs-string">ignore</span>
  <span class="hljs-comment"># - port: 8000</span>
  <span class="hljs-comment">#   onOpen: open-browser</span>
  <span class="hljs-comment"># - port: 7000</span>
  <span class="hljs-comment">#   onOpen: open-preview</span>
</code></pre>
<h3 id="heading-32-to-get-started-with-docker-compose-on-gitpod-click-on-the-open-in-gitpod-button">3.3. To get started with Docker Compose on Gitpod, click the "Open in Gitpod" button.</h3>
<h2 id="heading-4-lab-1-express-redis">4. Lab #1: <code>express-redis</code></h2>
<pre><code class="lang-yml">## List the start up tasks. Learn more https://www.gitpod.io/docs/config-start-tasks/
tasks:

  - name: Start Redis Stack
    ## working directory as `/README/lab1.express-redis`
    before: |
      cd README/lab1.express-redis
    init: |
      docker compose pull
    command: |
      alias redis-cli="docker exec -it redis-stack redis-cli"
      echo "Use redis-cli to interact with Redis here."
      docker compose up -d
      gp sync-done finished
    openMode: split-left

  - name: Start Express Application
    ## working directory as `/README/lab1.express-redis`
    before: |
      cd README/lab1.express-redis/backend
    init: |
      npm install
    command: |
      gp sync-await finished &amp;&amp; \
      npm run dev
    openMode: split-right

## List the ports to expose. Learn more https://www.gitpod.io/docs/config-ports/
ports:
  - port: 9999
    onOpen: open-preview
  - port: 6379
    onOpen: ignore
</code></pre><ul>
<li><p>Upon clicking on Gitpod, a workspace will be opened in the browser that contains:</p>
<ul>
<li>VS Code for editing code.</li>
<li>An embedded browser window in which the application is running.</li>
<li>Two terminal sessions: the left terminal can be used for entering commands, and the right terminal runs the application using nodemon. Nodemon restarts the application when you save code changes.</li>
</ul>
</li>
</ul>
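Conceptually, the express-redis lab implements the cache-aside pattern: check Redis first, and only hit the backing data source on a miss. A self-contained sketch of that logic, where a `Map` stands in for the real Redis client so the snippet runs without the container:

```typescript
// Cache-aside sketch: a Map stands in for Redis GET/SET so this runs
// stand-alone; swap in a real Redis client inside the Express handlers.
const cache = new Map<string, string>();

async function getWithCache(
  key: string,
  loadFromSource: () => Promise<string>
): Promise<string> {
  const hit = cache.get(key);
  if (hit !== undefined) return hit;      // cache hit: skip the data source
  const value = await loadFromSource();   // cache miss: load and remember
  cache.set(key, value);
  return value;
}
```

With a real Redis client you would also set a TTL (the `EX` option) so cached entries expire instead of going stale forever.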
<h2 id="heading-5-lab-2-elasticsearch-logstash-kibana">5. Lab #2: <code>elasticsearch-logstash-kibana</code></h2>
<pre><code class="lang-yml"><span class="hljs-comment">## List the start up tasks. Learn more https://www.gitpod.io/docs/config-start-tasks/</span>
<span class="hljs-attr">tasks:</span>

  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Start</span> <span class="hljs-string">ELK</span> <span class="hljs-string">Elasticsearch</span> <span class="hljs-string">Logstash</span> <span class="hljs-string">Kibana</span>
    <span class="hljs-comment">## working directory as `/README/lab2.elasticsearch-logstash-kibana`</span>
    <span class="hljs-attr">before:</span> <span class="hljs-string">|
      cd README/lab2.elasticsearch-logstash-kibana
</span>    <span class="hljs-attr">init:</span> <span class="hljs-string">|
     docker compose pull
</span>    <span class="hljs-attr">command:</span> <span class="hljs-string">|
     docker compose up -d
     gp sync-done finished
</span>    <span class="hljs-attr">openMode:</span> <span class="hljs-string">split-left</span>

  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Test</span> <span class="hljs-string">ELK</span>
    <span class="hljs-comment">## working directory as `/README/lab2.elasticsearch-logstash-kibana`</span>
    <span class="hljs-attr">before:</span> <span class="hljs-string">|
      cd README/lab2.elasticsearch-logstash-kibana
</span>    <span class="hljs-attr">init:</span> <span class="hljs-string">|
      echo "[Test ELK] init ..."
</span>    <span class="hljs-attr">command:</span> <span class="hljs-string">|
      gp sync-await finished &amp;&amp; \
      echo "[Test ELK] command ..."
      echo "PORTS &gt;&gt; 5601 &gt;&gt; Open Preview"
</span>    <span class="hljs-attr">openMode:</span> <span class="hljs-string">split-right</span>
</code></pre>
<h2 id="heading-6-next-steps">6. Next Steps</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1667823809581/jficP0AkF.gif" alt="Architecture.gif" /></p>
<ul>
<li>💎 Modernizing Full-Stack Applications with ⚡ Serverless Containers 🐳 and Infrastructure as Code ⛅</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[[💪 FullStack Serverless ⚡ Frontend  💎] Deploying Single-Page Application (SPA) with AWS CDK V2 🆓]]></title><description><![CDATA[🎯 As the frontend for the Serverless Application on AWS, this React application has been bootstrapped with Create React App.
💎 The AWS CDK Construct cdk-spa for deploying Single-Page Application (Angular/React/Vue) to AWS S3 behind CloudFront CDN, ...]]></description><link>https://blog.oceansoft.io/cdk-spa</link><guid isPermaLink="true">https://blog.oceansoft.io/cdk-spa</guid><category><![CDATA[aws-cdk]]></category><category><![CDATA[React]]></category><category><![CDATA[TypeScript]]></category><category><![CDATA[serverless]]></category><category><![CDATA[spa]]></category><dc:creator><![CDATA[Thanh Nguyen]]></dc:creator><pubDate>Fri, 21 Oct 2022 04:27:03 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1666155034982/d0wF3wdnP.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<blockquote>
<p>🎯 As the <code>frontend</code> for the Serverless Application on AWS, this React application has been bootstrapped with <a target="_blank" href="https://github.com/facebook/create-react-app">Create React App</a>.</p>
<p>💎 The AWS CDK Construct <code>cdk-spa</code> for deploying Single-Page Application (Angular/React/Vue) to <strong>AWS S3</strong> behind <strong>CloudFront CDN</strong>, <strong>Route53 DNS</strong>, <strong>Certificate Manager</strong> in minutes.</p>
<p>💎 The CDK Construct <code>cdk-cognito</code> for deploying Amazon Cognito to provide authentication, authorization, and user management for web &amp; mobile applications.</p>
</blockquote>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://www.youtube.com/watch?v=A3miMT1CKYI">https://www.youtube.com/watch?v=A3miMT1CKYI</a></div>
<h2 id="heading-1-create-a-serverless-application-using-the-aws-cloud-development-kit-cdk">1. Create a Serverless Application using the AWS Cloud Development Kit (CDK) ⚙️</h2>
<p>This is the CDK source code for deploying a <strong>Serverless Application ⚡</strong> on AWS, which includes the infrastructure for the ReactJS Frontend, NodeJS Backend, Cognito authentication, authorization, and user management, CodeBuild / CodePipeline DevOps CI/CD, and IAM / KMS / CloudWatch Operation.</p>
<ul>
<li><p><strong>Step 1.1.</strong> Create an AWS CDK app <code>cdk</code></p>
<pre><code>CDK_APP_ID=cdk
mkdir $CDK_APP_ID &amp;&amp; cd $CDK_APP_ID

cdk init app --language typescript
</code></pre></li>
<li><p><strong>Step 1.2.</strong> Installing <a target="_blank" href="https://www.npmjs.com/package/cdk-spa">cdk-spa</a> for deploying <code>frontend</code></p>
<p><code>npm install --save cdk-spa</code></p>
<ul>
<li><code>cdk-spa</code> Option 1. Basic setup needed for a non-SSL, non-cached <strong>S3</strong> website.</li>
<li><code>cdk-spa</code> Option 2. S3 deployment will be created, which is fronted by a <strong>Cloudfront Distribution</strong>.</li>
<li><code>cdk-spa</code> Option 3. The deployment of <strong>S3, Cloudfront Distribution, ACM SSL certificates, and Route53 hosted zones</strong>.</li>
</ul>
<blockquote>
<p>💎 This <strong>CDK TypeScript Construct Library</strong> <code>cdk-spa</code> includes a construct <code>CdkSpa</code> and an interface <code>CdkSpaProps</code> to make deploying a <strong>Single Page Application (SPA)</strong> Website (<a target="_blank" href="https://reactjs.org/docs/create-a-new-react-app.html">React.js</a> / <a target="_blank" href="https://vuejs.org/">Vue.js</a> / <a target="_blank" href="https://angular.io/">Angular</a>) to <strong>AWS S3</strong> behind <strong>CloudFront CDN</strong>, <strong>Route53 DNS</strong>, <strong>Certificate Manager SSL</strong> as easy as 5 lines of code.</p>
</blockquote>
</li>
<li><p><strong>Step 1.3.</strong> Usage of CDK constructs <code>Serverless/cdk/lib/frontend-stack.ts</code></p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> { StackProps, Stack } <span class="hljs-keyword">from</span> <span class="hljs-string">'aws-cdk-lib'</span>;
<span class="hljs-keyword">import</span> { Construct } <span class="hljs-keyword">from</span> <span class="hljs-string">'constructs'</span>;
<span class="hljs-keyword">import</span> { CdkSpa } <span class="hljs-keyword">from</span> <span class="hljs-string">'cdk-spa'</span>;

<span class="hljs-keyword">interface</span> Props <span class="hljs-keyword">extends</span> StackProps {
  contentBucket: <span class="hljs-built_in">string</span>;
}

<span class="hljs-keyword">export</span> <span class="hljs-keyword">class</span> FrontendStack <span class="hljs-keyword">extends</span> Stack {
  <span class="hljs-keyword">constructor</span>(<span class="hljs-params">scope: Construct, id: <span class="hljs-built_in">string</span>, props: Props</span>) {
    <span class="hljs-built_in">super</span>(scope, id, props);

    <span class="hljs-comment">/**
     * 1. Encrypted S3 Bucket
     * 2. Deploying a SPA-Website to AWS S3 behind CloudFront CDN
     * 3. Auto Deploy to/from Hosted Zone Name
     */</span>
    <span class="hljs-keyword">new</span> CdkSpa(<span class="hljs-built_in">this</span>, <span class="hljs-string">'CDK-SPA Website behind Cloudfront CDN'</span>, {
      bucketName: props.contentBucket,
      encryptBucket: <span class="hljs-literal">true</span>,
      <span class="hljs-comment">// ipFilter: true,</span>
      <span class="hljs-comment">// ipList: ['1.1.1.1']</span>
    })
      <span class="hljs-comment">/* Option 1. Basic setup needed for a non-SSL, non vanity url, non cached S3 website. */</span>
      .createSiteS3({
      <span class="hljs-comment">// /* Option 2. S3 deployment will be created, which is fronted by a Cloudfront Distribution. */</span>
      <span class="hljs-comment">// .createSiteWithCloudfront({</span>
      <span class="hljs-comment">// /* Option 3. The deployment of S3, Cloudfront Distribution, ACM SSL certificates, and Route53 hosted zones. */</span>
      <span class="hljs-comment">// .createSiteFromHostedZone({</span>
      <span class="hljs-comment">//   zoneName: 'serverless.aws.oceansoft.io',</span>
        indexDoc: <span class="hljs-string">'index.html'</span>,
        websiteFolder: <span class="hljs-string">'../frontend'</span>
      });

  }
}
</code></pre>
</li>
<li><p><strong>Step 1.4.</strong> Usage of CDK Application <code>Serverless/cdk/bin/cdk.ts</code></p>
<pre><code class="lang-typescript"><span class="hljs-meta">#!/usr/bin/env node</span>
<span class="hljs-keyword">import</span> <span class="hljs-string">'source-map-support/register'</span>;
<span class="hljs-keyword">import</span> * <span class="hljs-keyword">as</span> cdk <span class="hljs-keyword">from</span> <span class="hljs-string">'aws-cdk-lib'</span>;

<span class="hljs-keyword">import</span> { FrontendStack } <span class="hljs-keyword">from</span> <span class="hljs-string">"../lib/frontend-stack"</span>;

<span class="hljs-keyword">const</span> APP_ID = <span class="hljs-string">"Serverless"</span>;

<span class="hljs-keyword">const</span> account = process.env.AWS_ACCOUNT;
<span class="hljs-keyword">const</span> region = process.env.AWS_REGION;
<span class="hljs-keyword">const</span> contentBucketName = <span class="hljs-string">`<span class="hljs-subst">${APP_ID}</span>-<span class="hljs-subst">${account}</span>-<span class="hljs-subst">${region}</span>-content`</span>.toLowerCase();

<span class="hljs-keyword">const</span> app = <span class="hljs-keyword">new</span> cdk.App();

<span class="hljs-comment">/**
 * 1. AuthStack
 * 2. BackendStack
 * 3. FrontendStack
 * 4. OpsStack
 * 5. DashboardStack
 */</span>

<span class="hljs-comment">/* 3. FrontendStack */</span>
<span class="hljs-keyword">const</span> frontend = <span class="hljs-keyword">new</span> FrontendStack(app, APP_ID.concat(<span class="hljs-string">"-Frontend"</span>), {
  contentBucket: contentBucketName,
  <span class="hljs-comment">// env: { account: account, region: region },</span>
});

cdk.Tags.of(frontend).add(<span class="hljs-string">"APP_ID"</span>, APP_ID);
</code></pre>
</li>
<li><p><strong>Step 1.5.</strong> CDK Deployment <code>deploy.sh</code></p>
<pre><code class="lang-sh"><span class="hljs-built_in">echo</span> <span class="hljs-string">"Boostrap the AWS Account/Region you plan to deploy to ..."</span>
cdk bootstrap aws://<span class="hljs-variable">${AWS_ACCOUNT}</span>/<span class="hljs-variable">${AWS_REGION}</span>

<span class="hljs-built_in">echo</span> <span class="hljs-string">"Installing &amp; building ..."</span>
npm install
npm run build

<span class="hljs-built_in">echo</span> <span class="hljs-string">"deploy this stack to your AWS account/region"</span>
cdk deploy --all --require-approval never
</code></pre>
</li>
</ul>
<hr />
<ul>
<li><p>⚠️ CDK v1 👇</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://www.youtube.com/watch?v=3IC1edBb2BA">https://www.youtube.com/watch?v=3IC1edBb2BA</a></div>
</li>
<li><p>✅ Source Code: https://github.com/OceanSoftIO/Serverless/cdk</p>
</li>
</ul>
<hr />
<h2 id="heading-2-deploying-a-cdk-application-using-aws-cloudshell">2. Deploying a CDK Application using AWS CloudShell 🆓</h2>
<blockquote>
<p>🆓 <strong>CloudShell</strong> provides you with a browser-based shell to run scripts and commands. It includes 1 GB of persistent storage per Region at <strong>no extra cost</strong> to you. </p>
</blockquote>
<pre><code class="lang-sh">git clone https://github.com/OceanSoftIO/Serverless

cd Serverless/cdk
./deploy.sh
</code></pre><h2 id="heading-3-create-react-application-frontend">3. Create React Application <code>frontend</code></h2>
<ul>
<li><p>✅ Frontend Tech Stack: ✅ React.js || ☑️ Next.js || ☑️ Angular || ☑️ Vue</p>
</li>
<li><p>To bootstrap Create React App, run the following commands:</p>
<pre><code>npx create-react-app frontend

cd frontend
npm start
</code></pre></li>
<li><p>☑️ Running Tests with the React Testing Library</p>
<pre><code>npm run test
</code></pre></li>
<li><p>☑️ Changing the App’s MetaData || Images || Other Types of Assets</p>
</li>
</ul>
<ul>
<li><p>☑️ Installing Dependencies</p>
<pre><code>npm install axios
</code></pre></li>
<li><p>☑️ Importing components</p>
</li>
</ul>
<ul>
<li>☑️ Styling React App with CSS</li>
</ul>
<ul>
<li><p>☑️ Building and Publishing the React App</p>
<pre><code>npm run build
</code></pre></li>
</ul>
<h2 id="heading-4-deploying-amazon-cognito-cdk-cognito">4. Deploying Amazon Cognito <code>cdk-cognito</code></h2>
<blockquote>
<p>💎 The CDK Construct cdk-cognito for deploying Amazon Cognito to provide authentication, authorization, and user management for web &amp; mobile applications.</p>
</blockquote>
<h2 id="heading-5-when-andamp-why-andamp-should-developers-consider-micro-frontends">5. When, Why &amp; Whether Developers Should Consider Micro-Frontends</h2>
<p>📚 <a target="_blank" href="https://increment.com/frontend/micro-frontends-in-context">Micro-Frontends in context</a></p>
<h2 id="heading-6-server-side-rendering-micro-frontend">6. Server-Side Rendering Micro-Frontend</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1666093497702/E6J3lw7SH.png" alt="Micro-Frontend" /></p>
<ul>
<li><p>Micro-frontends are a very effective way to embrace distributed systems on the front end using a Serverless approach. As every Serverless Micro-Frontend returns an HTML fragment (HTML-on-the-wire), a UI composer stitches together these independent components, creating a seamless user experience.</p>
<ul>
<li><p>This architecture starts with Amazon CloudFront that has two origins: an Amazon Simple Storage Service (Amazon S3) bucket, and a public Application Load Balancer.</p>
</li>
<li><p>The Amazon S3 bucket contains all the static files to be served for the browser, such as common micro-frontend dependencies, images, or CSS files. Additionally, it contains the templates required by the UI composer for placing each micro-frontend on an HTML page.</p>
</li>
<li><p>The UI composer is an AWS Fargate cluster that combines different micro-frontends into one and serves the results to the browser in real-time, streamlining the response to improve the performance of web applications. Using an Amazon ElastiCache cluster can increase performance even further by caching some micro-frontend output or the entire page.</p>
</li>
<li><p>Use AWS Systems Manager Parameter Store to collect all the micro-services endpoints. They can be HTTP endpoints, or Amazon Resource Names (ARNs) of a specific service such as AWS Lambda or AWS Step Functions. By decoupling, you maintain independence between the teams working on the application.</p>
</li>
<li><p>The serverless micro-frontend is composed of AWS Lambda and Amazon DynamoDB for storing the data that will be rendered. The output is an HTML fragment ready to be embedded in the template composed by the UI composer.</p>
</li>
<li><p>If you work with third parties in the same application, front the endpoints with Amazon API Gateway, which validates tokens or API keys so that only your application can access them.</p>
</li>
<li><p>Step Functions Express provides a low-code solution for creating micro-frontends. Step Functions' integration with over 200 services allows you, for example, to retrieve data natively from Amazon DynamoDB and delegate only the rendering of that data to an AWS Lambda function.</p>
</li>
</ul>
</li>
</ul>
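The UI-composer idea above boils down to a few lines: each micro-frontend returns an HTML fragment, and the composer substitutes those fragments into slots in a page template before streaming the page to the browser. A minimal sketch (the `{{slot:name}}` placeholder convention is invented here for illustration):

```typescript
// Sketch of a UI composer: replace {{slot:name}} placeholders in a page
// template with the HTML fragments returned by each micro-frontend.
type Fragment = { slot: string; html: string };

function composePage(template: string, fragments: Fragment[]): string {
  return fragments.reduce(
    (page, f) => page.replace(`{{slot:${f.slot}}}`, f.html),
    template
  );
}
```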
]]></content:encoded></item><item><title><![CDATA[⛅ Architecture Patterns for building Serverless Applications ⚡]]></title><description><![CDATA[🎯 Modernizing with Serverless-First Approach
🎯 Architecture Patterns for building Serverless-App.
Benefits of Serverless Computing (Lambda) ⚡

No infrastructure provisioning, no management
Automatic scaling
Pay-for-use
Highly available and secure

...]]></description><link>https://blog.oceansoft.io/architecture-patterns-for-building-serverless-applications</link><guid isPermaLink="true">https://blog.oceansoft.io/architecture-patterns-for-building-serverless-applications</guid><category><![CDATA[serverless]]></category><category><![CDATA[architecture]]></category><category><![CDATA[API Gateway]]></category><category><![CDATA[aws lambda]]></category><category><![CDATA[Cloud]]></category><dc:creator><![CDATA[OceanSoft]]></dc:creator><pubDate>Sat, 15 Oct 2022 03:30:45 GMT</pubDate><content:encoded><![CDATA[<p>🎯 Modernizing with Serverless-First Approach</p>
<p>🎯 Architecture Patterns for building Serverless-App.</p>
<h2 id="heading-benefits-of-serverless-computing-lambda">Benefits of Serverless Computing (Lambda) ⚡</h2>
<ul>
<li>No infrastructure provisioning, no management</li>
<li>Automatic scaling</li>
<li>Pay-for-use</li>
<li>Highly available and secure</li>
</ul>
<h2 id="heading-pattern-1-serverless-web-applications-serverless-backend">Pattern 1. Serverless Web Applications - Serverless Backend</h2>
<blockquote>
<p>Use-case 1: Performance Dashboard Solution</p>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1665741888843/cH3bFybVq.png" alt="Figure 2: Serverless Backend" /></p>
<ul>
<li><p><strong>Amazon API-Gateway</strong></p>
<ul>
<li>A managed API platform that is compatible with OpenAPI</li>
<li>From edge to private APIs (Edge, regional, access to VPC and private APIs) + From API to any endpoint (inside or outside of AWS Cloud).</li>
<li>Monitored &amp; traced via CloudWatch and X-Ray.</li>
<li>Super easy Lambda Integration.</li>
<li>A wide range of features - models, cache, and usage plans</li>
</ul>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1665741931389/QgmsLRz4l.png" alt="Figure 2. API Gateway - Serving Dynamic Content" /></p>
<h2 id="heading-pattern-2-serverless-web-applications-graphql-serverless-backend">Pattern 2. Serverless Web Applications - GraphQL Serverless Backend</h2>
<blockquote>
<p>Use-case 2: <a target="_blank" href="https://solution.job4u.io/Digital-Platform/WhatsApp-like-SmartChat/">SmartChat WhatsApp-like Real-Time &amp; Offline Messaging Web-App</a></p>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1665741993802/s7c1cQNZl.png" alt="Figure 3. GraphQL Serverless Backend" /></p>
<h2 id="heading-managing-costs-for-serverless-workloads">Managing Costs for Serverless Workloads</h2>
<ul>
<li>[x] <a target="_blank" href="https://github.com/alexcasalboni/aws-lambda-power-tuning"><strong>Lambda Power Tuning</strong></a>: Utilize Lambda Power Tuning to decide how much memory to allocate to lambda functions to achieve the right performance and cost balance. </li>
<li>[x] <strong>Compute Optimizer</strong> is a passive approach that uses machine learning algorithms to analyze the function executions and recommend changes to enhance performance and reduce costs.  </li>
<li>[ ] Lambda - Provisioned concurrency</li>
<li>[ ] Lambda - Log tuning</li>
<li>[ ] Lambda – AVX2</li>
</ul>
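Lambda billing is roughly memory × duration, which is why the Power Tuning trade-off works: more memory costs more per millisecond but often cuts duration enough to be cheaper overall. A back-of-the-envelope model (the per-GB-second price and the duration figures are illustrative assumptions, not AWS quotes):

```typescript
// Rough invocation-cost model: cost = GB allocated × seconds run × unit price.
// The default price is an illustrative figure; check current AWS pricing.
function invocationCost(
  memoryMb: number,
  durationMs: number,
  pricePerGbSecond = 0.0000166667
): number {
  return (memoryMb / 1024) * (durationMs / 1000) * pricePerGbSecond;
}

// Example trade-off: 8x the memory can still be cheaper if it runs 10x faster.
const lowMemory = invocationCost(128, 1000);  // 128 MB for 1000 ms
const highMemory = invocationCost(1024, 100); // 1024 MB for 100 ms
```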
<h2 id="heading-performance-dashboard-path-to-modernization">[Performance Dashboard] Path to Modernization</h2>
<ul>
<li><p>[x] Step 1. Strategy</p>
<ul>
<li>[ ] Re-platform</li>
<li>[ ] Refactor</li>
<li>[x] Build New</li>
</ul>
</li>
<li><p>[x] Step 2. Pilot</p>
<ul>
<li>[ ] Move to manage</li>
<li>[x] Build new with serverless</li>
</ul>
</li>
<li><p>[x] Step 3. Define</p>
<ul>
<li>[x] Development practices</li>
<li>[x] Operational expertise</li>
<li>[x] Deployment best practices</li>
<li>[x] Cost management &amp; governance</li>
</ul>
</li>
<li><p>[ ] Step 4. Optimize</p>
<ul>
<li>[x] Decrease build &amp; deployment time</li>
<li>[ ] Decrease TCO</li>
<li>[ ] Decrease time to market</li>
</ul>
</li>
<li><p>[ ] Step 5. Scale</p>
<ul>
<li>[ ] Organization wide</li>
<li>[ ] Critical workloads</li>
<li>[x] Global reach</li>
</ul>
</li>
<li><p>✍️ Accelerate your organization-wide adoption (Define | Optimize | Scale):</p>
<ul>
<li>[ ] Enable Cloud Center of Excellence (CCOE) for Cloud Native apps</li>
<li>[ ] Establish a Shared Services Platform (SSP)</li>
<li>[ ] Build a community</li>
</ul>
</li>
</ul>
<h2 id="heading-building-an-end-to-end-serverless-web-application">Building an End-to-End Serverless Web Application</h2>
<h3 id="heading-1-business-case-todo">1. Business Case: [TODO]</h3>
<h3 id="heading-2-frontend-reactjs-todo">2. Frontend: React.js: [TODO]</h3>
<h3 id="heading-3-backend-nodejs-todo">3. Backend: Node.js [TODO]</h3>
<h3 id="heading-4-iac-cdk-todo">4. IaC: CDK [TODO]</h3>
<h3 id="heading-5-open-source-todo-httpsgithubcomoceansoftioserverless">5. Open-Source [TODO]: https://github.com/OceanSoftIO/Serverless</h3>
]]></content:encoded></item><item><title><![CDATA[💡 Scaling up to your first 1 Million Users 🚦]]></title><description><![CDATA[🎯 Iterative Application Modernization Pattern & Strangler Pattern 📚
🎯 Deploy Your first Web Application in minutes on Heroku || AWS AppRunner & RDS (Free-Tier 🆓)
1. Iterative Application Modernization
Monolith to MicroServices: Many Monolithic Ap...]]></description><link>https://blog.oceansoft.io/scaling-up-to-your-first-1-million-users</link><guid isPermaLink="true">https://blog.oceansoft.io/scaling-up-to-your-first-1-million-users</guid><category><![CDATA[Cloud]]></category><category><![CDATA[scalability]]></category><category><![CDATA[architecture]]></category><dc:creator><![CDATA[OceanSoft]]></dc:creator><pubDate>Fri, 14 Oct 2022 09:16:12 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1665739831114/QnO8FRo0N.gif" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>🎯 Iterative Application Modernization Pattern &amp; Strangler Pattern 📚</p>
<p>🎯 Deploy Your first Web Application in minutes on Heroku || AWS AppRunner &amp; RDS (Free-Tier 🆓)</p>
<h2 id="heading-1-iterative-application-modernization">1. Iterative Application Modernization</h2>
<p>Monolith to MicroServices: Many <strong>Monolithic Applications</strong> generate revenue for your company by adding value to your customers. You may have heard statements such as "let's move to a <strong>MicroService-based Architecture</strong>", but "we must deal with the <strong>Data Tier</strong> first". </p>
<blockquote>
<p>How do we get there safely when we are heavily dependent on the application ⁉️</p>
</blockquote>
<p>✅ Martin Fowler’s <strong><a target="_blank" href="https://www.linkedin.com/posts/nnthanh_migrating-monolithic-applications-with-the-activity-6775946159061127169-XC9h">Strangler Pattern</a></strong> 📚: This methodology has been applied to moving specific data sets from a multi-terabyte Monolithic Database to a <strong>Purpose-built Database</strong> (SQL: MySQL / Postgres; NoSQL: MongoDB / DynamoDB), and utilizing a <strong>Data Lake</strong> to improve data access and performance.</p>
<ul>
<li>https://www.linkedin.com/posts/nnthanh_migrating-monolithic-applications-with-the-activity-6775946159061127169-XC9h</li>
<li>https://www.slideshare.net/SmartBizVN/migrating-monolithic-applications-with-the-strangler-pattern</li>
</ul>
<blockquote>
<p>[Example] eCommerce Breaking-down the Monolith (Data-tier and App-tier)</p>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1665720524181/7qKcShTFY.gif" alt="Iterative Application Modernization Pattern.gif" /></p>
<ul>
<li>We have separated Shipping Service Data into a purpose-built database and a MicroService designed to handle shipping. With the newly developed Shipping Service, you will have the opportunity to develop MicroServices that focus on one job and do it extremely well.</li>
<li>The Orders Service will come next, and Inventory Service will get their own MicroServices.</li>
<li>The Shopping Cart Service remains in the Monolith to ensure that customer service is not disrupted while the application is rearchitected. Once all monolithic capabilities have been replaced by MicroServices, we can eliminate the monolithic app. Note that the Monolith and MicroServices will coexist for a period of time.</li>
</ul>
<p>☑️ Additionally, you may also use <strong>Adapter pattern</strong> and <strong>Façade pattern</strong>.</p>
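<p>The strangler facade itself can be as simple as a router that sends migrated paths to the new microservices and everything else to the monolith. A minimal sketch (all URLs and path prefixes below are illustrative assumptions):</p>

```javascript
// Strangler facade sketch: route migrated paths to new microservices,
// everything else to the monolith. URLs and prefixes are illustrative.
const routes = [
  { prefix: '/shipping', target: 'http://shipping-service:8081' },
  { prefix: '/orders', target: 'http://orders-service:8082' },
];
const MONOLITH = 'http://monolith:8080';

function resolveTarget(path) {
  const match = routes.find((r) => path.startsWith(r.prefix));
  return match ? match.target : MONOLITH;
}

console.log(resolveTarget('/shipping/track/42')); // → http://shipping-service:8081
console.log(resolveTarget('/cart/items'));        // → http://monolith:8080
```

<p>As each capability is extracted, you add one more entry to the routing table; when the table covers everything, the monolith can be retired.</p>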
<h2 id="heading-2-devtest-users-andlt-10000">2. 🆓 [Dev/Test] Users &lt; 10,000</h2>
<ul>
<li>🥇 Multi-AZ</li>
<li>🥇 Elastic Load Balancing between tiers</li>
<li>🥇 Auto Scaling</li>
<li>🎖️ Service-Oriented Architecture (SOA): Split Tiers into individual SOA Services.</li>
</ul>
<h3 id="heading-lab-1-hosting-apps-on-heroku">Lab 1. Hosting Apps on Heroku</h3>
<blockquote>
<p>☑️ TODO: https://heroku.com/deploy?template=https://github.com/OceanSoftIO/ecommerce/</p>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1665724324729/SMIu2bJgx.png" alt="Hosting-NodeJS-Apps-on-Heroku.png" /></p>
<p>⚠️ Heroku Free-Tier: <a target="_blank" href="https://www.heroku.com/pricing">Heroku Pricing</a> || <a target="_blank" href="https://thenewstack.io/where-can-heroku-free-tier-users-go/">Where Can Heroku Free Tier Users Go?</a></p>
<ul>
<li><p>[Example] This application has the following components:</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1665736466205/5-J4yeZ71.png" alt="Hosting-NodeJS-Heroku.png" /></p>
<ul>
<li>Backend: Node.js REST API built with Express.js with resource endpoints that use Client to handle database operations against a PostgreSQL database (e.g., hosted on Heroku).</li>
<li>Frontend: Static HTML page to interact with the API.</li>
</ul>
</li>
</ul>
<h3 id="heading-lab-2-deploy-your-first-web-application-in-minutes">Lab 2. Deploy Your first Web Application in minutes</h3>
<ul>
<li><p>☑️ TODO - Build and deploy solutions on AWS using AWS App Runner and Amazon RDS 🆓: https://github.com/OceanSoftIO/Terraform/tree/feature/AppRunner</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1665725359083/e0l2FuVwy.png" alt="Build and deploy solutions on AWS using AWS App Runner and Amazon RDS" /></p>
</li>
</ul>
<h2 id="heading-3-staging">3. 🥈 [Staging]</h2>
<ul>
<li>🥈 Serving content smartly (Cloud Storage/S3, CDN/Cloudfront)</li>
<li>🥈 Caching off databases</li>
<li>🥈 Moving state off tiers so that they can auto scale</li>
</ul>
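<p>"Caching off databases" means serving repeated reads from memory instead of hitting the database every time. A minimal read-through cache sketch with TTL expiry (in production this role is usually played by Redis or Memcached; the TTL value and the fake loader are assumptions for illustration):</p>

```javascript
// Read-through cache sketch with TTL expiry. In production this role is
// usually played by Redis or Memcached; the TTL value and the fake
// database loader below are assumptions for illustration.
class TtlCache {
  constructor(ttlMs) {
    this.ttlMs = ttlMs;
    this.entries = new Map(); // key -> { value, expiresAt }
  }

  get(key, loader) {
    const hit = this.entries.get(key);
    if (hit && hit.expiresAt > Date.now()) return hit.value; // cache hit
    const value = loader(key); // in a real app: a database query
    this.entries.set(key, { value, expiresAt: Date.now() + this.ttlMs });
    return value;
  }
}

const cache = new TtlCache(60_000); // 60-second TTL
let dbCalls = 0;
const fakeDbLookup = (id) => { dbCalls++; return { id, name: 'widget' }; };

cache.get('p1', fakeDbLookup);
cache.get('p1', fakeDbLookup); // served from cache, no database call
console.log('db calls:', dbCalls); // → db calls: 1
```

<p>The same idea scales out by replacing the in-process <code>Map</code> with a shared cache so that every auto-scaled instance benefits from each other's reads.</p>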
<h2 id="heading-4-production">4. 🥉 Production</h2>
<ul>
<li><p>🥉 Monitoring, metrics, and logging: Deeply analyze your entire stack, then fine-tune your application.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1665737176210/XqQvPDyd0.gif" alt="AWS X-Ray - Visualize service call graph.gif" /></p>
</li>
<li><p>🥉 Going from Multi-AZ to Multi-Region</p>
</li>
<li><p>🥉 Database: </p>
<ul>
<li>Federation: Splitting into multiple databases based on function</li>
<li>Sharding: Splitting one data set across multiple hosts</li>
<li><p>Moving some functionality to other types of databases (NoSQL, Graph)</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1665739864518/h52YE_8eH.gif" alt="Users-1000000.gif" /></p>
</li>
</ul>
</li>
</ul>
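<p>Sharding assigns each key to one of several database hosts. The simplest scheme is hash-based routing, sketched below (the shard names and hash function are illustrative; production systems often prefer consistent hashing so that adding a shard moves only a fraction of the keys):</p>

```javascript
// Hash-based shard routing sketch. Shard names and the hash function are
// illustrative; production systems often use consistent hashing so that
// adding a shard relocates only a fraction of the keys.
const SHARDS = ['db-shard-0', 'db-shard-1', 'db-shard-2', 'db-shard-3'];

function hashKey(key) {
  // FNV-1a 32-bit hash: deterministic and well distributed for short keys.
  let h = 0x811c9dc5;
  for (let i = 0; i < key.length; i++) {
    h ^= key.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h;
}

function shardFor(key) {
  return SHARDS[hashKey(key) % SHARDS.length];
}

console.log(shardFor('user:1001')); // the same key always routes to the same shard
```

<p>Federation, by contrast, needs no hash at all: the routing key is the function itself (orders database, catalog database, and so on).</p>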
<h2 id="heading-5-next-steps">5. Next Steps ⚡</h2>
<ul>
<li><p>🏅 Service-Oriented Architecture (SOA) of features/functionality</p>
</li>
<li><p>🏅 Build serverless whenever possible ⚡</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1665737346497/bfa1aLMHO.gif" alt="The Micro-Services architecture.gif" /></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[🆓 [Open-Source] Medusa Headless-eCommerce Shopify alternative 🐳]]></title><description><![CDATA[🎯 Create Your Open-Source Ecommerce Store using Medusa (Backend), Gatsby (Admin) and Next.js (Storefront) ⚡
🎯  Deliverables:

🐳 Medusa eCommerce Backend: Node.js
⚡ Medusa eCommerce Admin: Gatsby
⚡ Medusa eCommerce Storefront: Next.js

🌥️ The clou...]]></description><link>https://blog.oceansoft.io/medusa-headless-ecommerce-shopify-alternative</link><guid isPermaLink="true">https://blog.oceansoft.io/medusa-headless-ecommerce-shopify-alternative</guid><category><![CDATA[medusa]]></category><category><![CDATA[ecommerce]]></category><category><![CDATA[shopify]]></category><dc:creator><![CDATA[OceanSoft]]></dc:creator><pubDate>Thu, 13 Oct 2022 10:52:01 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1665566704325/TxB7vXVt_.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>🎯 Create Your Open-Source Ecommerce Store using Medusa (Backend), Gatsby (Admin) and Next.js (Storefront) ⚡</p>
<p>🎯  Deliverables:</p>
<ul>
<li>🐳 Medusa eCommerce Backend: <code>Node.js</code></li>
<li>⚡ Medusa eCommerce Admin: <code>Gatsby</code></li>
<li>⚡ Medusa eCommerce Storefront: <code>Next.js</code></li>
</ul>
<p>🌥️ The <em>cloud journey</em> generally involves migrating and modernizing websites and apps, including building and hosting websites, developing web and mobile apps, and monitoring and managing them. This hands-on series illustrates how to build a production-ready <strong>Headless CMS</strong> and <strong>Headless eCommerce</strong> using Jamstack (stands for JavaScript, API, and Markup) on Cloud (Heroku, AWS ...) 🌥 .</p>
<ol>
<li>✅ ⚡ <a target="_blank" href="https://blog.oceansoft.io/headless-cms-with-gatsby-contentful">Building a Production-Ready Headless CMS with Jamstack (Gatsby and Contentful)</a> 🎁</li>
<li>✅ 🐳 <a target="_blank" href="https://blog.oceansoft.io/strapi-nodejs-headless-cms">Dockerizing Strapi - Open-Source NodeJS Headless CMS</a></li>
<li>☑️ 🐳 <a target="_blank" href="https://blog.oceansoft.io/medusa-headless-ecommerce-shopify-alternative">Medusa Headless-eCommerce - Open-Source Shopify alternative</a> ⚡</li>
</ol>
<h2 id="heading-introduction-to-headless-ecommerce">Introduction to Headless eCommerce</h2>
<p>Medusa is an open-source API-first headless commerce platform giving engineers the foundation for building unique and scalable digital commerce projects quickly.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1665566751659/K3_fx41Rs.png" alt="Headless eCommerce Architecture" /></p>
<p>The Shopify eCommerce platform simplifies the creation of e-commerce stores for merchants and businesses who don't need to learn the technical details of setting up shops and want to get started quickly. However, Medusa is built for developers and focuses on providing a great developer experience with an abstraction-based architecture, ease of setup, strong documentation, and a supportive community. </p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1665569164102/k8JHihoiP.png" alt="Medusa as an alternative to Shopify" /></p>
<h2 id="heading-headless-ecommerce-architecture">Headless eCommerce Architecture</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1665566849520/naJ1OT1iP.png" alt="Classic vs. Headless Architecture" /></p>
<p>A Headless eCommerce platform collects, maintains, and distributes content without a built-in Frontend layer: the frontend has been decoupled and removed, leaving only the Backend layer.</p>
<p><em>Backend Developers</em> can then provide and retrieve things like product items, blog posts, and product reviews to any device using <strong>REST/GraphQL APIs</strong>, while <em>Frontend Developers</em> can present that fetched data in their own beautiful format using whichever framework they prefer, such as <strong>ReactJS, VueJS, Angular</strong>, etc.</p>
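<p>As a sketch, a frontend in any framework can build a request against such an API; the base URL and query parameters below follow Medusa's store API conventions but are assumptions for illustration:</p>

```javascript
// Sketch: building a request to a headless commerce backend.
// The base URL and query parameters are assumptions for illustration.
const baseUrl = 'http://localhost:9000';

function productsUrl(params = {}) {
  const qs = new URLSearchParams(params).toString();
  return `${baseUrl}/store/products${qs ? `?${qs}` : ''}`;
}

const url = productsUrl({ limit: '10', offset: '0' });
console.log(url); // → http://localhost:9000/store/products?limit=10&offset=0

// A React/Vue/Angular frontend would then render the JSON response, e.g.:
// fetch(url).then((res) => res.json()).then(({ products }) => render(products));
```

<p>Because the contract is just HTTP and JSON, the same backend can feed a website, a mobile app, or a kiosk without any change.</p>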
<h2 id="heading-medusa-ecommerce-backend-nodejs">🐳 Medusa eCommerce Backend: <code>Node.js</code></h2>
<p>Like Shopify, Medusa offers a similar set of core ecommerce features, including payment and checkout, cart-functionality, fulfillment flow, shipping options, customer profiles (for customer-specific pricing), advanced promotions (such as coupons and discounts), and product and stock management. </p>
<p>Despite Shopify's simplicity, most of its pros and features are a result of its <em>monolithic architecture</em>, which is also a weakness.</p>
<p>Medusa's open-source and abstraction-based architecture allows you to customize and compose your store to suit each individual use case. Using Medusa, you can alter the core setup to fit your needs, or extend Medusa's APIs to add functionality.</p>
<pre><code class="lang-sh"><span class="hljs-built_in">echo</span> <span class="hljs-string">"Install the Medusa CLI"</span>
yarn global add @medusajs/medusa

medusa new backend
<span class="hljs-comment"># medusa new backend --seed</span>

<span class="hljs-built_in">cd</span> backend
<span class="hljs-comment"># medusa develop</span>
yarn start
</code></pre>
<h2 id="heading-medusa-ecommerce-admin-gatsby">⚡ Medusa eCommerce Admin: <code>Gatsby</code></h2>
<p>For non-technical store managers, an out-of-the-box admin panel provides built-in flows for claims, returns, and exchanges, allowing end-users to self-serve. Shopify offers integrated marketing and sales analytics, while the Medusa admin panel offers greater customizability thanks to its extensible and composable architecture.</p>
<h2 id="heading-medusa-ecommerce-storefront-nextjs">⚡ Medusa eCommerce Storefront: <code>Next.js</code></h2>
<p>With Shopify, you can easily set up themed solutions with a great starter package and a variety of available themes. Shopify Plus (starting at $2,000/month) allows developers to go headless through the Hydrogen setup while developing functionalities and completely customizing the storefront.</p>
<p>The Medusa Frontend and Backend are decoupled, so the storefront functionality and design can be customized without interfering with the Backend. Developers can then use <strong>Next.js</strong> or <strong>Gatsby</strong> or any other front-end framework of their choice.</p>
<ul>
<li><p><strong>Create Next.js Starter</strong>:</p>
<p>Open the terminal and use the following command to create an instance of your storefront:</p>
<pre><code>npx create-next-app -e https:<span class="hljs-comment">//github.com/medusajs/nextjs-starter-medusa storefront</span>

cd storefront
cp .env.template .env.local
</code></pre><p>Now you have a storefront codebase that is ready to be used with your Medusa server.</p>
</li>
<li><p><strong>Link Storefront to Your Server</strong></p>
<p>By default, the storefront is linked to the server at the URL localhost:9000. If you need to change that, create the file .env in the root of your Next.js starter and add a new variable:</p>
<p><code>NEXT_PUBLIC_MEDUSA_URL=&lt;BACKEND_URL&gt;</code></p>
<p>Make sure to replace <code>&lt;BACKEND_URL&gt;</code> with the URL of your Medusa server.</p>
</li>
<li><p>Update the STORE_CORS variable</p>
<p>By default, the storefront runs at <a target="_blank" href="http://localhost:8000">localhost:8000</a> and the backend uses that URL to avoid CORS errors. If you need to change the URL or port, in .env file in the root of your Medusa Server add the following new variable:</p>
<p><code>STORE_CORS=&lt;STOREFRONT_URL&gt;</code></p>
<p>Make sure you replace <code>&lt;STOREFRONT_URL&gt;</code> with the URL of your storefront.</p>
</li>
<li><p>Start your Store</p>
<p>To start your store, first, you need to run the Medusa server. In the directory that holds your Medusa server run the following:</p>
<p><code>yarn start</code></p>
<p>Then, in the directory that holds your Next.js storefront, run the following command:</p>
<p><code>yarn dev</code></p>
<p>Now, open the storefront at <a target="_blank" href="http://localhost:8000">localhost:8000</a> (or the URL/port you specified) and you’ll see your store and the products!</p>
</li>
</ul>
<p>🏁 Hooking up your Headless eCommerce Server with the Storefront is very easy using Medusa! You can now have your entire Server up and running with the products, cart, and checkout functionalities.</p>
]]></content:encoded></item><item><title><![CDATA[🆓 [Open-Source] Strapi  Node.js Headless CMS 🌏]]></title><description><![CDATA[🎯 An Open-Source NodeJS-based Content Management System with a fully customizable API. You can save time and effort by creating production-ready Node.js APIs in hours rather than weeks. 🚀
🎯  Deliverables:

Strapi CMS Backend Template: https://www....]]></description><link>https://blog.oceansoft.io/strapi-nodejs-headless-cms</link><guid isPermaLink="true">https://blog.oceansoft.io/strapi-nodejs-headless-cms</guid><category><![CDATA[cms]]></category><category><![CDATA[headless cms]]></category><category><![CDATA[Strapi]]></category><category><![CDATA[Dockerfile]]></category><category><![CDATA[Docker compose]]></category><dc:creator><![CDATA[OceanSoft]]></dc:creator><pubDate>Wed, 12 Oct 2022 04:50:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1665478901979/bxKkiPyF5.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>🎯 An Open-Source NodeJS-based Content Management System with a fully customizable API. You can save time and effort by creating production-ready Node.js APIs in hours rather than weeks. 🚀</p>
<p>🎯  Deliverables:</p>
<ul>
<li>Strapi CMS Backend Template: https://www.npmjs.com/package/strapi-blog</li>
<li>Strapi CMS Backend: https://github.com/OceanSoftIO/cms-blog/tree/main/backend<ul>
<li><code>Dockerfile</code></li>
<li><code>Dockerfile.prod</code></li>
<li><code>docker-compose.yml</code></li>
</ul>
</li>
</ul>
<p>🌥️ The <em>cloud journey</em> generally involves migrating and modernizing websites and apps, including building and hosting websites, developing web and mobile apps, and monitoring and managing them. This hands-on series illustrates how to build a production-ready <strong>Headless CMS</strong> and <strong>Headless eCommerce</strong> using Jamstack (stands for JavaScript, API, and Markup) on Cloud (Heroku, AWS ...).</p>
<ol>
<li>✅ ⚡ <a target="_blank" href="https://blog.oceansoft.io/headless-cms-with-gatsby-contentful">Building a Production-Ready Headless CMS with Jamstack (Gatsby and Contentful)</a> 🎁</li>
<li>✅ 🐳 <a target="_blank" href="https://blog.oceansoft.io/strapi-nodejs-headless-cms">Dockerizing Strapi - Open-Source NodeJS Headless CMS</a></li>
<li>☑️ 🐳 <a target="_blank" href="https://blog.oceansoft.io/medusa-headless-ecommerce-shopify-alternative">Medusa Headless-eCommerce - Open-Source Shopify alternative</a> ⚡</li>
</ol>
<hr />
<h2 id="heading-1-overview-of-strapi-headless-cms">1. Overview of Strapi Headless CMS</h2>
<p>The Strapi Headless CMS is an Open-Source, Node.js platform for creating, managing, exposing and sharing content-rich experiences. </p>
<p>Traditional or monolithic Web-first CMSs, such as WordPress, combine the frontend (website design and layout) and backend (the interface for editing and creating content) into a single application.
The next-generation Content-first Headless CMS uses an API for content delivery and allows complete separation of the backend (creation and storage) and frontend (design and deployment). Headless architecture not only delivers better performance and flexibility but also provides stronger security by making it nearly impossible for end-users to access the backend.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1665496191527/OBDXVr0mH.png" alt="Strapi Architecture.png" /></p>
<p>Developers are free to focus on designing amazing frontends that meet the needs of their customers because of all these benefits.</p>
<ul>
<li><em>Frontend</em>: Developers can use their preferred frontend technology to deliver high-quality content experiences while quickly integrating features like authentication (via social media logins), content delivery, and payment processing into a full-fledged business application. </li>
<li><em>REST &amp; GraphQL API</em>: The Strapi backend makes content accessible and displayable on any device via a GraphQL or REST API. In this series, we focus exclusively on the GraphQL API.</li>
<li><em>Database Independent</em>: A variety of database systems can be configured, including PostgreSQL, MySQL, MariaDB, and SQLite. In this guide, we configure Strapi for use with PostgreSQL.</li>
</ul>
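<p>As a sketch, the same content can be requested either way. The collection name <code>articles</code> and its fields below are assumptions based on a typical blog schema, not the exact types this template ships with:</p>

```javascript
// Sketch: REST vs GraphQL access to the same Strapi v4 content.
// The collection name 'articles' and its fields are assumptions
// based on a typical blog schema.
const restUrl = 'http://localhost:1337/api/articles?sort=publishedAt:desc';

const graphqlQuery = `
  query LatestArticles {
    articles(sort: "publishedAt:desc") {
      data {
        attributes { title slug publishedAt }
      }
    }
  }
`;

// Either request returns JSON the frontend is free to render however it likes:
// fetch(restUrl)
// fetch('http://localhost:1337/graphql', {
//   method: 'POST',
//   headers: { 'Content-Type': 'application/json' },
//   body: JSON.stringify({ query: graphqlQuery }),
// })
console.log(restUrl);
```

<p>GraphQL lets the frontend pick exactly the fields it needs, which is why this series leans on it for content delivery.</p>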
<h2 id="heading-2-installing-strapi-cms-backendhttpsgithubcomoceansoftiocms-bloggit">2. Installing <a target="_blank" href="https://github.com/OceanSoftIO/cms-blog.git">Strapi CMS-Backend</a></h2>
<ul>
<li><p>🚦 Prerequisites</p>
<ul>
<li><a target="_blank" href="https://academy.job4u.io/setup-development-and-testing-environment-on-macos/">👨‍💻 Setup Development and Testing Environment on MacOS</a></li>
<li>✅ Node.js v16</li>
<li>✅ Yarn v1</li>
<li>☑️ Gatsby CLI</li>
</ul>
</li>
<li><p><em>Option 1.</em> Git clone from https://github.com/OceanSoftIO/cms-blog.git</p>
<pre><code>git clone https:<span class="hljs-comment">//github.com/OceanSoftIO/cms-blog.git</span>
cd cms-blog
</code></pre></li>
<li><p><em>Option 2.</em> Install <a target="_blank" href="https://www.npmjs.com/package/strapi-blog">Strapi CMS Backend</a> with the blog schema template outside of the frontend directory on your machine.</p>
<pre><code>echo <span class="hljs-string">"Strapi V4 template: https://github.com/OceanSoftIO/cms-blog/tree/main/template"</span>
yarn create strapi-app backend --quickstart --template strapi-blog

echo <span class="hljs-string">"Strapi V3 OLD-version !!!"</span>
# yarn create strapi-app cms --quickstart --template https:<span class="hljs-comment">//github.com/OceanSoftIO/cms-blog</span>
</code></pre></li>
<li><p>Following the installation, Strapi’s control panel will open in your browser, where you can register the admin user and create content.</p>
</li>
<li><a target="_blank" href="https://github.com/Academy4U/strapi-blog">Frontend (Gatsby or Next.js) integration</a></li>
</ul>
<h3 id="heading-21-powerful-cms-backend-graphql-apis">2.1. Powerful CMS-Backend GraphQL APIs</h3>
<pre><code class="lang-sh"><span class="hljs-built_in">cd</span> cms-blog/backend

yarn install
yarn develop
</code></pre>
<ul>
<li><p>🌏 http://localhost:1337/admin</p>
</li>
<li><p><a target="_blank" href="https://github.com/OceanSoftIO/cms/tree/main/postman">🎁 Installation Service &gt;&gt; 🔒 Postman-GraphQL 💲</a></p>
</li>
</ul>
<h3 id="heading-22-testing-and-deploying-the-strapi-api">2.2. Testing and Deploying the Strapi API</h3>
<h2 id="heading-3-docker-setup">3. Docker Setup</h2>
<blockquote>
<p>Developers are faced with the task of launching a development environment that has different software packages of certain versions. Fortunately, Docker solves this problem in the modern development world.</p>
<p>Creating <code>Dockerfile</code> &amp; <code>docker-compose.yml</code></p>
</blockquote>
<pre><code>  # cd cms-blog/backend
  # cat Dockerfile
  # cat docker-compose.yml
</code></pre><ul>
<li><p><a target="_blank" href="https://github.com/Academy4U/docker/blob/docker/strapi/strapi/Dockerfile">🐳 Dockerfile</a></p>
<blockquote>
<p>🐳 Creating a new <code>Dockerfile</code>: If you are using YARN, please use the following 👇</p>
</blockquote>
<pre><code class="lang-docker">FROM node:16
## Installing libvips-dev for sharp Compatability
RUN apt-get update &amp;&amp; apt-get install libvips-dev -y

# FROM node:16-alpine
## Installing libvips-dev for sharp Compatability
# RUN apk update &amp;&amp; apk add  build-base gcc autoconf automake zlib-dev libpng-dev nasm bash vips-dev

ARG NODE_ENV=development
ENV NODE_ENV=${NODE_ENV}
WORKDIR /opt/
COPY ./package.json ./yarn.lock ./
ENV PATH /opt/node_modules/.bin:$PATH
RUN yarn config set network-timeout 600000 -g &amp;&amp; yarn install
WORKDIR /opt/app
COPY ./ .
RUN yarn build
EXPOSE 1337

CMD ["yarn", "develop"]
</code></pre>
</li>
<li><p><a target="_blank" href="https://github.com/Academy4U/docker/blob/docker/strapi/strapi/Dockerfile.npm">🐳 Dockerfile.Prod</a></p>
<blockquote>
<p>🐳 Optimizing your <code>Dockerfile</code> ☠️, then please use the following 👇</p>
</blockquote>
<p>We can optimize the Docker image using the following methods:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1666085241940/qS61h1rdU.png" alt="image.png" /></p>
<ul>
<li>Using distroless/minimal base images</li>
<li>Multistage builds</li>
<li>Minimizing the number of layers</li>
<li>Understanding caching</li>
<li>Using Dockerignore</li>
<li>Keeping application data elsewhere</li>
</ul>
<pre><code class="lang-dockerfile">FROM node:16-alpine as build
## Installing libvips-dev for sharp Compatability
RUN apk update &amp;&amp; apk add build-base gcc autoconf automake zlib-dev libpng-dev vips-dev &amp;&amp; rm -rf /var/cache/apk/* &gt; /dev/null 2&gt;&amp;1
ARG NODE_ENV=production
ENV NODE_ENV=${NODE_ENV}
WORKDIR /opt/
COPY ./package.json ./yarn.lock ./
ENV PATH /opt/node_modules/.bin:$PATH
RUN yarn config set network-timeout 600000 -g &amp;&amp; yarn install
WORKDIR /opt/app
COPY ./ .
RUN yarn build

FROM node:16-alpine
RUN apk add vips-dev
RUN rm -rf /var/cache/apk/*
ARG NODE_ENV=production
ENV NODE_ENV=${NODE_ENV}
WORKDIR /opt/app
COPY --from=build /opt/node_modules ./node_modules
ENV PATH /opt/node_modules/.bin:$PATH
COPY --from=build /opt/app ./
EXPOSE 1337
CMD ["yarn", "start"]
</code></pre>
</li>
<li><p>Quick tour of <code>Dockerfile</code>: 👇</p>
<blockquote>
<p>🐳 To get started, let's take a quick tour of Dockerfile: 👇</p>
</blockquote>
<ul>
<li>Initially, we'll use <code>node:16</code> (~330 MB) or <code>node:16-alpine</code> (~39 MB) as our base image. </li>
<li>We'll install some libraries, like <code>libvips-dev</code> for sharp compatibility, passing <code>-y</code> to answer yes to all prompts.</li>
<li>The node environment <strong>ARG</strong> is set to <code>development</code> by default, so we don't have to provide it each time.</li>
<li>The <strong>ENV</strong> allows us to override it if we want to switch from <code>development</code> to <code>production</code>.</li>
<li>We'll define our file paths and whatnot in the <code>/opt</code> WORKDIR working folder inside our container.</li>
<li>We copy package.json and yarn.lock (or package-lock.json if you're using npm) into our work directory. Docker caches each layer, so doing this first will speed up our build process.</li>
<li>Docker then knows where to find our node_modules</li>
<li>In case of network problems or slow internet, we set a large network timeout (600,000 ms) to allow extra time.</li>
<li>Afterwards, <code>yarn install</code> installs all dependencies.</li>
<li>Then we change the working directory to <code>/opt/app</code>.</li>
<li>Next, we copy the project we created in step 1, <code>cms-backend</code>, into this folder.</li>
<li>We then run <code>yarn build</code> to build our Strapi project.</li>
<li>Finally, we expose port <code>1337</code> and tell Docker to run <code>yarn develop</code></li>
</ul>
</li>
</ul>
<ul>
<li><p><code>.dockerignore</code> 👇</p>
<blockquote>
<p>🐳 Docker Ignore: Create a file called <code>.dockerignore</code>: 👇</p>
</blockquote>
<pre><code class="lang-dockerignore">.tmp/
.cache/
public
.git/
build/
node_modules/
data/
</code></pre>
<p>✍️ These folders in <code>.dockerignore</code> will be skipped ⛔️ by Docker 🐳 since they are not necessary.</p>
</li>
</ul>
<hr />
<h2 id="heading-4-building-andamp-running-the-docker-image">4. Building &amp; Running the Docker Image</h2>
<ul>
<li><p><strong>Step 1. Building the Docker Image</strong></p>
<p><code>docker build -t cms-backend:latest .</code></p>
<ul>
<li>The name of the docker image is <code>cms-backend</code>, and it's tagged with <code>:latest</code></li>
<li>Lastly, grab a cup of coffee ☕️ (the build normally takes a few minutes) and sit back while Docker does its magic 🪄</li>
</ul>
<p>If you need to reclaim disk space from old images and build cache, you can clean up with <code>docker system prune --all --force</code> (⚠️ this removes all unused images and containers).</p>
</li>
<li><p><strong>Step 2. Running the Docker Image</strong></p>
<p><code>docker run -d -p 1337:1337 cms-backend</code></p>
<ul>
<li>Docker will run the image cms-backend, or whatever you called your project, 🤔 on port 1337.</li>
<li><code>-d</code> means detached and is a fancy way of saying "Runs in the background"</li>
<li><p>Tip: To run Strapi on another port while developing, change the left-hand side of the port mapping.</p>
<p><code>docker run -d -p 8888:1337 cms-backend</code></p>
<p><a target="_blank" href="http://localhost:8888/admin/">run on port 8888</a> 👍</p>
</li>
</ul>
<ul>
<li>Finally, <a target="_blank" href="http://localhost:1337/admin/">run on port 1337</a> 👍 </li>
</ul>
</li>
</ul>
<blockquote>
<p>✍️ We are currently using an SQLite database, which is always inside the container. Whenever we stop a container, we lose all changes. Using <code>docker-compose</code>, we can use a Postgres database and run multiple instances of Docker if needed.</p>
</blockquote>
<hr />
<h2 id="heading-5-utilizing-docker-compose-for-the-next-level">5. Utilizing <code>docker-compose</code> for the next level</h2>
<ul>
<li><p>⬆️ https://github.com/Academy4U/docker/blob/docker/strapi/README.Strapi.md</p>
</li>
<li><p>🪄 Think of <code>docker-compose</code> as a way to make different steps or services that we want to run.</p>
</li>
<li><p>🔔 https://github.com/Academy4U/docker/blob/docker/strapi/strapi/docker-compose.yml</p>
</li>
<li><p><code>config/database.js</code></p>
<blockquote>
<p>⚙️ <code>config/database.js</code> 👇</p>
</blockquote>
<pre><code class="lang-javascript">const path = require('path');

// module.exports = ({ env }) =&gt; ({
//   connection: {
//     client: 'sqlite',
//     connection: {
//       filename: path.join(__dirname, '..', env('DATABASE_FILENAME', '.tmp/data.db')),
//     },
//     useNullAsDefault: true,
//   },
// });

/** PostgreSQL Database */
module.exports = ({ env }) =&gt; ({
 connection: {
   client: env("DATABASE_CLIENT", "postgres"),

   connection: {
     host:     env("DATABASE_HOST", "127.0.0.1"),
     port:     env.int("DATABASE_PORT", 5432),
     database: env("DATABASE_NAME", "cms"),
     user:     env("DATABASE_USERNAME", "cms"),
     password: env("DATABASE_PASSWORD", "cms"),
   },
   debug: false,
 },
});
</code></pre>
</li>
<li><p><code>.env</code></p>
<details>
<summary>⚙️ <code>.env</code> 👇</summary>

<code>env
HOST=0.0.0.0
PORT=1337

...

DATABASE_HOST=localhost
DATABASE_PORT=5432
# DATABASE_PORT=3306
DATABASE_NAME=cms
DATABASE_USERNAME=cms
DATABASE_PASSWORD=cms
NODE_ENV=development
DATABASE_CLIENT=postgres
# DATABASE_CLIENT=mysql</code>

</details>

<blockquote>
<p>🐳 In the root of the project, create a file called <code>docker-compose.yml</code>. Due to the YAML format, spacing matters, so I've used spaces rather than tabs 👇</p>
</blockquote>
<pre><code class="lang-yaml">version: "3"
services:
 cms:
   container_name: cms
   build: .
   image: cms:latest
   restart: unless-stopped
   env_file: .env
   environment:
     DATABASE_CLIENT: ${DATABASE_CLIENT}
     DATABASE_HOST: cmsDB
     DATABASE_NAME: ${DATABASE_NAME}
     DATABASE_USERNAME: ${DATABASE_USERNAME}
     DATABASE_PORT: ${DATABASE_PORT}
     JWT_SECRET: ${JWT_SECRET}
     ADMIN_JWT_SECRET: ${ADMIN_JWT_SECRET}
     DATABASE_PASSWORD: ${DATABASE_PASSWORD}
     NODE_ENV: ${NODE_ENV}
   volumes:
     - ./config:/opt/app/config
     - ./src:/opt/app/src
     - ./package.json:/opt/package.json
     - ./yarn.lock:/opt/yarn.lock ##Replace with package-lock.json if using npm
     - ./.env:/opt/app/.env
   ports:
     - "1337:1337"
   networks:
     - cms
   depends_on:
     - cmsDB

 cmsDB:
   image: postgres:12.0-alpine
   container_name: cmsDB
   platform: linux/amd64 ##for platform error on Apple M1 chips
   restart: unless-stopped
   env_file: .env
   environment:
     POSTGRES_USER: ${DATABASE_USERNAME}
     POSTGRES_PASSWORD: ${DATABASE_PASSWORD}
     POSTGRES_DB: ${DATABASE_NAME}
   volumes:
     - cms-data:/var/lib/postgresql/data/ ##using a volume
     #- ./data:/var/lib/postgresql/data/  ##if you want to use a bind folder
   ports:
     - "5432:5432"
   networks:
     - cms

volumes:
   cms-data:

networks:
 cms:
   name: cms
   driver: bridge
</code></pre>
<blockquote>
<p>📚 I'll explain what all of this means: 👇</p>
</blockquote>
<ul>
<li><code>version</code> - <a target="_blank" href="https://docs.docker.com/compose/compose-file/compose-versioning/">Docker-compose version 3</a></li>
<li><code>services</code> - We are defining two services cms and cmsDB</li>
<li><code>cms</code> - The name of the service we defined</li>
<li><code>container_name</code> - The name of the container. You can call it whatever you want.</li>
<li><code>build</code> - Telling cms to build the image in our project folder <code>.</code>.</li>
<li><code>image</code> - The image name we want to build</li>
<li><code>restart</code> - Unless we STOP or take down the container, it will keep restarting.</li>
<li><code>env_file</code> - Providing a .env file containing the environmental variables we should keep secret. </li>
<li><code>environment</code> - Here we define all the variables we want to use. Values come from the <code>.env</code> file via the <code>${VARIABLE_NAME}</code> placeholder syntax.</li>
<li><code>volumes</code> - Mounts files into the container. This could simply be <code>./:/opt/app</code>, but binding individual folders and files (instead of the whole project) lets us develop locally while avoiding binding <code>node_modules</code>, which should stay inside the container.</li>
<li><code>ports</code> - What ports we want to expose. Note: You can change the left side to another port, such as 8080:1337, but remember that the right side needs to be 1337, which is the port inside the container where CMS is running.</li>
<li><p><code>networks</code> - Sets up a Docker network so that our containers can communicate with each other. Separately, <code>depends_on</code> tells Docker to start the <code>cmsDB</code> Postgres container before the cms container, which saves us some errors from starting the CMS without a database.</p>
</li>
<li><p>Similarly, we give Postgres a name, but we use the official <code>postgres:12.0-alpine</code> image instead of building it ourselves. In addition, we are creating a volume called <code>cms-data</code> to hold our database.</p>
</li>
<li><p>✍️ When you install Docker Desktop for macOS, docker-compose is installed automatically; on Linux, however, you must install it separately.</p>
</li>
</ul>
</li>
</ul>
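<p>📝 As a sketch of the variable substitution described above (the values here are placeholders I chose, not this project's real credentials), a minimal <code>.env</code> file can be generated and inspected like this:</p>

```shell
# Hypothetical .env file; docker-compose reads it via `env_file` and substitutes
# ${DATABASE_USERNAME} etc. in the compose file at parse time.
printf 'DATABASE_USERNAME=strapi\nDATABASE_PASSWORD=change-me\nDATABASE_NAME=cms\n' > .env

# `docker compose config` would render the compose file with these values resolved.
grep -c '=' .env   # -> 3
```

<p>Keep <code>.env</code> out of version control (add it to <code>.gitignore</code>) so the secrets never land in the repository.</p>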
<h3 id="heading-running-our-project">🚀 Running our project</h3>
<ul>
<li><p>[ ] 🐳 <strong>Local</strong>: This spins up just the Postgres database, and we can run and edit files exactly as when working on Strapi anywhere.</p>
<p><code>docker-compose up -d cmsDB &amp;&amp; yarn develop</code></p>
</li>
<li><p>[x] 🐳 <strong>Full</strong>: This will run Strapi inside a Docker Container and the database in its own container.</p>
<p><strong><code>docker-compose up -d</code></strong></p>
</li>
</ul>
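<p>⏳ Note that <code>docker-compose up -d cmsDB</code> returns before Postgres is actually accepting connections, so a small retry helper can guard <code>yarn develop</code>. This helper is my sketch, not from the project; the real readiness probe would be something like <code>docker compose exec cmsDB pg_isready</code>:</p>

```shell
# wait_for <retries> <command...> : retry the command once per second until it
# succeeds or the retry budget is spent (returns 1 on timeout).
wait_for() {
  tries=$1; shift
  until "$@"; do
    tries=$((tries - 1))
    [ "$tries" -le 0 ] && return 1
    sleep 1
  done
}

# In practice: wait_for 30 docker compose exec cmsDB pg_isready -U "$DATABASE_USERNAME"
wait_for 3 true && echo "database ready"   # -> database ready
```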
<hr />
<h2 id="heading-next-steps">Next Steps</h2>
<ul>
<li>1️⃣ Backend Deployment using Render, Heroku, GCP, <code>AWS</code></li>
<li>2️⃣ Frontend: Gatsby Cloud, Netlify, <code>AWS Amplify</code></li>
<li>3️⃣ Infrastructure as Code: Build and Deploy Application to <code>AWS App Runner || ECS/EKS</code> using <code>Terraform</code> and <code>AWS CodePipeline</code></li>
</ul>
]]></content:encoded></item><item><title><![CDATA[⚡ Building a Production-Ready Headless CMS with Jamstack (Gatsby and Contentful) 🎁]]></title><description><![CDATA[🎯 Implementing Headless Architecture for Content Management System (CMS) using Jamstack (JavaScript, APIs, and Markup) and Cloud Hosting (Gatsby Cloud, Netlify, AWS Amplify) ⚡

🚀 Live Demo: https://academy.job4u.io/
🌥️ The cloud journey generally ...]]></description><link>https://blog.oceansoft.io/headless-cms-with-gatsby-contentful</link><guid isPermaLink="true">https://blog.oceansoft.io/headless-cms-with-gatsby-contentful</guid><category><![CDATA[cms]]></category><category><![CDATA[headless cms]]></category><category><![CDATA[Gatsby]]></category><category><![CDATA[Contentful]]></category><category><![CDATA[architecture]]></category><dc:creator><![CDATA[OceanSoft]]></dc:creator><pubDate>Tue, 11 Oct 2022 04:38:11 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1665460767187/rT0EEbqZx.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<blockquote>
<p>🎯 Implementing Headless Architecture for Content Management System (CMS) using Jamstack (JavaScript, APIs, and Markup) and Cloud Hosting (Gatsby Cloud, Netlify, AWS Amplify) ⚡</p>
</blockquote>
<p>🚀 Live Demo: https://academy.job4u.io/</p>
<p>🌥️ The <em>cloud journey</em> generally involves migrating and modernizing websites and apps, including building and hosting websites, developing web and mobile apps, and monitoring and managing them. This hands-on series illustrates how to build a production-ready <strong>Headless CMS</strong> and <strong>Headless eCommerce</strong> using Jamstack (stands for JavaScript, API, and Markup) on Cloud (Heroku, AWS ...).</p>
<ol>
<li>✅ ⚡ <a target="_blank" href="https://blog.oceansoft.io/headless-cms-with-gatsby-contentful">Building a Production-Ready Headless CMS with Jamstack (Gatsby and Contentful)</a> 🎁</li>
<li>☑️ 🐳 <a target="_blank" href="https://blog.oceansoft.io/strapi-nodejs-headless-cms">Dockerizing Strapi - Open-Source NodeJS Headless CMS</a></li>
<li>☑️ 🐳 <a target="_blank" href="https://blog.oceansoft.io/medusa-headless-ecommerce-shopify-alternative">Medusa Headless-eCommerce - Open-Source Shopify alternative</a> ⚡</li>
</ol>
<hr />
<h2 id="heading-incremental-architecture-approach">Incremental Architecture Approach</h2>
<p>We'll look at a traditional waterfall project timeline, in which all evaluation is done upfront, step by step, before approval and kickoff. After that, you add the content to the CMS, develop the site, do some launch preparation, and then launch it 🚀.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1665459194818/rN9bCuwnE.png" alt="OLD - Traditional Project Timeline.png" /></p>
<p>Using incremental architecture, project timelines can be reduced by making architectural decisions during the project rather than delaying the start of the project. You can also synchronize work across multiple systems, such as Headless CMS, Headless eCommerce, and Analytics. </p>
<p>By mocking your content during initial development, you can add content to your CMS while developing your site. You can also add content to your CMS/eCommerce Backend while working on your frontend.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1665459222315/b-keE60wo.png" alt="NEW - Incremental Architecture.png" /></p>
<h2 id="heading-headless-architecture-for-cms">Headless Architecture for CMS</h2>
<p>A traditional or monolithic <strong>Web-first CMS</strong>, such as WordPress, combines the frontend (website design and layout) and the backend (the interface for editing and creating content) into a single application.</p>
<p>A next-generation <strong>Content-first Headless CMS</strong> uses an API for content delivery and allows complete separation of the backend (creation and storage) from the frontend (design and deployment). Headless architecture not only delivers better performance and flexibility but also provides stronger security by making it nearly impossible for end users to reach the backend.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1665390185193/y2wsc7-El.jpeg" alt="Headless Architecture" /></p>
<h2 id="heading-headless-cms-with-jamstack">Headless CMS with Jamstack</h2>
<p>In the diagram below, Jamstack's typical website is made up of several types of systems, such as <a target="_blank" href="https://www.gatsbyjs.com/">Gatsby</a> and Contentful/Strapi CMS, that are fast, secure, and offer a great digital experience. </p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1665390641217/vHUgTwNHy.png" alt="headless-cms-strapi-gatsby.png" /></p>
<p>By utilizing modular architecture, you can not only achieve better business results, but also do so in an accelerated timeframe. This proves to all your stakeholders and clients the value of new technologies.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1665460448331/hBpzdyCOx.png" alt="Modular Architecture" /></p>
<blockquote>
<p>Reference: <a target="_blank" href="https://www.contentful.com/blog/2022/06/07/ship-your-website-faster-with-incremental-architecture/">Ship your Contentful website faster with incremental architecture</a></p>
</blockquote>
<h3 id="heading-cloud-hosting-gatsby-cloud-netlify-aws-amplify">Cloud Hosting (Gatsby Cloud, Netlify, AWS Amplify)</h3>
<ul>
<li>✅ AWS Amplify</li>
<li>☑️ Gatsby Cloud</li>
<li>☑️ Netlify</li>
</ul>
<h3 id="heading-webhook-aws-amplify-andamp-contentful">[Webhook] AWS Amplify &amp; Contentful</h3>
<ul>
<li><p>[ ] .env --&gt; Amplify &gt; <code>Environment variables</code></p>
</li>
<li><p>[ ] 1. Create an incoming webhook to publish content updates. </p>
<ul>
<li>[x] Choose App Settings &gt; Build Settings &gt; Incoming webhooks, and then choose <code>Create webhook</code>. This webhook enables you to trigger a build in the Amplify Console on every POST to the HTTP endpoint.</li>
<li>[x] After you create the webhook, copy the URL (it looks like  https://webhooks.amplify.ap-southeast-1.amazonaws.com/prod/webhooks?id=XXX)</li>
</ul>
</li>
<li><p>[ ] 2. Go back to the Contentful dashboard, and choose Settings &gt; Webhooks. Then choose <code>Add Webhook</code>. Paste the webhook URL you copied from the Amplify Console into the URL section and update the Content Type to <code>application/json</code>. Choose <code>Save</code>.</p>
</li>
</ul>
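<p>🔔 To verify the hook outside Contentful, you can POST to the Amplify incoming webhook yourself. This is a sketch: the URL keeps the placeholder shape from the console (<code>id=XXX</code>), so we only assemble and print the command here rather than sending it.</p>

```shell
# Placeholder URL copied from Amplify > App Settings > Build Settings > Incoming webhooks.
WEBHOOK_URL="https://webhooks.amplify.ap-southeast-1.amazonaws.com/prod/webhooks?id=XXX"

# Every POST to this endpoint triggers a build in the Amplify Console.
cmd="curl -s -X POST -H 'Content-Type: application/json' -d '{}' $WEBHOOK_URL"
echo "$cmd"   # run this manually once the real webhook URL is in place
```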
<h2 id="heading-checklist-template">📚 Checklist Template 🎓</h2>
<div class="hn-table">
<table>
<thead>
<tr>
<td>#</td><td>Feature</td><td>Description</td><td>Your Site</td></tr>
</thead>
<tbody>
<tr>
<td>01</td><td>🎯 Integrations</td><td>Enabling Contentful CMS Integration</td><td>✅</td></tr>
<tr>
<td>02</td><td></td><td>Enabling Algolia Integration (search system)</td><td>✅</td></tr>
<tr>
<td>03</td><td></td><td>Enabling Disqus/Graph/Facebook Comments Integration (blog commenting)</td><td></td></tr>
<tr>
<td>04</td><td></td><td>Enabling MailChimp integration (newsletter list building)</td><td></td></tr>
<tr>
<td>05</td><td></td><td>Enabling Web App Manifest</td><td>✅</td></tr>
<tr>
<td>06</td><td>🎯 SEO</td><td>Adding your Site's metadata SEO tags</td><td>✅</td></tr>
<tr>
<td>07</td><td></td><td>Enabling Google Analytics Tracking</td><td>✅</td></tr>
<tr>
<td>08</td><td></td><td>Enabling Automatic Sitemap Generation</td><td>✅</td></tr>
<tr>
<td>09</td><td>🎯 Branding</td><td>Adding your Brand's colors</td><td>✅</td></tr>
<tr>
<td>10</td><td></td><td>Adding your Social Media links</td><td>✅</td></tr>
<tr>
<td>11</td><td></td><td>Adding your Logo to header and footer</td><td>✅</td></tr>
<tr>
<td>12</td><td></td><td>Adding your Favicon</td><td>✅</td></tr>
<tr>
<td>13</td><td></td><td>Adding fonts to match your brand</td><td></td></tr>
<tr>
<td>14</td><td>🎯 Form Handling</td><td>Setting up Contact Form with Netlify Forms</td><td></td></tr>
<tr>
<td>15</td><td>🎯 Deployment Configuration</td><td>Setting up a Git repository</td><td></td></tr>
<tr>
<td>16</td><td></td><td>Deploying a site to Amplify / Netlify</td><td>Amplify</td></tr>
<tr>
<td>17</td><td></td><td>Configuring Your Site for Continuous Deployment via Git</td><td>✅</td></tr>
<tr>
<td>18</td><td></td><td>Enabling a Contentful CMS Hook for automatic site-building</td><td>✅</td></tr>
<tr>
<td>19</td><td></td><td>Setting up your custom domain</td><td>✅</td></tr>
<tr>
<td>20</td><td></td><td>Temporary URL</td><td>✅</td></tr>
</tbody>
</table>
</div><h2 id="heading-installation-service-gatsby-andamp-contentful-cms-blog">🎁 Installation Service: Gatsby &amp; Contentful CMS-Blog</h2>
<blockquote>
<p>💲 Service Fee: $500 (Gatsby Theme + Support + Installation Service)</p>
<p>⌛ Turnaround: 1 Business Day</p>
</blockquote>
<p>🎯 Within 1 business day, you will receive the source code for your Amplify/Netlify-hosted website, as well as a temporary URL.</p>
]]></content:encoded></item><item><title><![CDATA[📚 [Solution Architecture] Docs-as-Code]]></title><description><![CDATA[🎯 The exploration and experimentation of how documents can be transformed into valuable sources of information. 📚 
🚀 Live Demo: solution.job4u.io
👨‍💻 https://oceansoftio.github.io/docs


Source Code: https://github.com/OceanSoftIO/docs.git

http...]]></description><link>https://blog.oceansoft.io/docs-as-code</link><guid isPermaLink="true">https://blog.oceansoft.io/docs-as-code</guid><category><![CDATA[Hugo]]></category><category><![CDATA[mkdocs]]></category><category><![CDATA[documentation]]></category><category><![CDATA[JAMstack]]></category><category><![CDATA[github-actions]]></category><dc:creator><![CDATA[OceanSoft]]></dc:creator><pubDate>Mon, 10 Oct 2022 04:33:01 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1665303324738/Spy3sOM0B.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<ul>
<li>🎯 The exploration and experimentation of how documents can be transformed into valuable sources of information. 📚 </li>
<li>🚀 Live Demo: <a target="_blank" href="https://solution.job4u.io">solution.job4u.io</a></li>
<li>👨‍💻 <a target="_blank" href="https://oceansoftio.github.io/docs">https://oceansoftio.github.io/docs</a></li>
</ul>
<blockquote>
<p>Source Code: <a target="_blank" href="https://github.com/OceanSoftIO/docs.git">https://github.com/OceanSoftIO/docs.git</a></p>
</blockquote>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://www.youtube.com/watch?v=6PWPyQaeiG0">https://www.youtube.com/watch?v=6PWPyQaeiG0</a></div>
<hr />
<h2 id="heading-what-is-docs-as-code">WHAT is Docs-as-Code</h2>
<ul>
<li><p>✍️ Documentation as Code (Docs-as-Code) is a growing trend in software documentation, in which documentation is written using the same tools as code:</p>
<ul>
<li>Issue Trackers, Version Control (Git), Code Reviews: <code>Github</code></li>
<li>Plain Text Markup (Markdown, Asciidoc, reStructuredText): <code>Markdown</code></li>
<li>CI/CD, Automated Tests: <code>AWS Amplify</code></li>
</ul>
</li>
<li><p>🎁 The product team needs to follow the same workflows as the development teams. As a result, both writers and developers feel ownership of documentation and work together to create a valuable source of information:</p>
<ul>
<li>Adopt an “agile” approach to content creation</li>
<li>Content is the responsibility of the entire team, not just the technical writers</li>
<li>The culture of adapting and improving content and processes over time. </li>
</ul>
</li>
</ul>
<h2 id="heading-why-we-like-docs-as-code">WHY We like Docs-as-Code ?</h2>
<p>In general, Docs-as-Code offers the following benefits for Technical Documentation:</p>
<ol>
<li>By integrating with the Development Team more effectively, the Release Technical Writer can deliver higher-quality documentation (information architecture, customer experience, etc.) more quickly through collaboration with multiple voices. </li>
<li>Developers often write the first draft of documentation, avoiding the documentation becoming a bottleneck. The writers then define and automate the process, including approval gates, automated quality checks, such as spelling and grammar, and external content publishing, such as <a target="_blank" href="https://solution.job4u.io">https://solution.job4u.io</a> or <a target="_blank" href="https://oceansoftio.github.io/docs">https://oceansoftio.github.io/docs</a>.</li>
<li>New features can't be merged if they don't include documentation, which incentivizes developers to document them immediately. Furthermore, Docs-as-Code eliminates the need for proprietary tools for technical writing and publishing.</li>
</ol>
<h2 id="heading-how-to-create-and-manage-quality-documentation">HOW to Create and Manage Quality Documentation</h2>
<p>A simple content authoring, review, and publishing workflow proceeds as follows:</p>
<ol>
<li>The documentation is located in the docs folder in the docs branch. We use an IDE like VSCode to create markdown files as well as extensions for diagramming tools like DrawIO, so you can also version control your diagrams!</li>
<li>Authors publish the branch to the GitHub source control system. Technically, they don't have to do this every time, but it's good practice as a backup.</li>
<li>Authors create a Pull Request (PR) when they are ready to submit content changes.</li>
<li>Content Editors will receive a notification when a new PR is published. Upon reviewing the content, the Editor can approve or reject the submission and add comments so the author knows what must be changed before approval.</li>
<li>Once a PR is created/updated (following feedback), an automated pipeline will execute various quality checks against the content.</li>
<li>Once the editor has approved the PR, the content can be merged into the main branch.</li>
<li>Content from the main branch can be built and deployed automatically to users using <code>AWS Amplify</code> CI/CD (Continuous Integration and Continuous Delivery) Pipeline.</li>
</ol>
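<p>🛠️ Steps 1–3 of the workflow above can be rehearsed end-to-end in a throwaway repository. This is a self-contained sketch: the branch and file names are mine, and the push/PR steps (4–7) happen on GitHub and in the pipeline:</p>

```shell
set -e
repo=$(mktemp -d)            # throwaway repo stands in for the GitHub clone
cd "$repo"
git init -q
git config user.name "Author" && git config user.email "author@example.com"

mkdir docs && echo "# Home" > docs/index.md
git add . && git commit -qm "docs: initial content"

git checkout -qb docs/new-section          # step 1: author on a docs branch
echo "## New section" >> docs/index.md
git commit -qam "docs: add new section"    # steps 2-3: commit, then push and open a PR
git log --oneline
```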
<h2 id="heading-adopting-docs-as-code-from-hackathon-to-production">Adopting Docs-as-Code: from Hackathon to Production</h2>
<p>Ideally, the architecture document should be prepared in <strong>Markdown</strong> format and managed in a <strong>Git</strong> repository to ensure high quality, manageability, version control, and traceability. </p>
<p>We use the following developer tools and processes to create and deliver content:</p>
<ul>
<li><p>Documentation template: <strong>VS Code</strong> || Intellij</p>
<ul>
<li>[x] <strong><a target="_blank" href="https://www.markdownguide.org/">Markdown *.md</a></strong>: https://solution.job4u.io/</li>
<li>[x] <a target="_blank" href="https://asciidoc-py.github.io/index.html">AsciiDoc</a>: https://github.com/OceanSoftIO/docs/asciidoc</li>
<li>[ ] <a target="_blank" href="https://docutils.sourceforge.io/rst.html">Restructured Text *.rst</a> </li>
</ul>
</li>
<li><p>Developer-based workflows: <strong>GitHub</strong> || GitLab</p>
<ul>
<li>[x] Version Control using tools, such as Git</li>
<li>[x] Change control driven through bug and feature-request tickets</li>
<li>[x] Content reviews and merges</li>
</ul>
</li>
<li><p>Static Site Generators (SSG) </p>
<ul>
<li>[x] Mkdocs --&gt; <a target="_blank" href="https://solution.job4u.io/">https://solution.job4u.io</a></li>
<li>[x] Hugo --&gt; <a target="_blank" href="https://terraform.job4u.io/">https://terraform.job4u.io/</a> || <a target="_blank" href="https://cdk.job4u.io/">https://cdk.job4u.io/</a></li>
<li>[ ] <a target="_blank" href="https://jekyllrb.com/">Jekyll</a></li>
<li>[ ] <a target="_blank" href="https://docusaurus.io/">Docusaurus</a></li>
<li>[ ] Sphinx (reStructuredText)</li>
<li>[ ] <a target="_blank" href="https://middlemanapp.com/">Middleman</a></li>
</ul>
</li>
<li><p>CI/CD Pipeline:</p>
<ul>
<li>[x] AWS Amplify</li>
<li>[x] <a target="_blank" href="https://github.com/OceanSoftIO/docs/tree/main/.github/workflows">GitHub Actions</a>: <a target="_blank" href="https://oceansoftio.github.io/docs">https://oceansoftio.github.io/docs</a></li>
<li>[ ] GitLab Pipeline</li>
<li>[ ] Netlify</li>
</ul>
</li>
<li><p>Script-Based Design Tools &amp; Architecture Templates</p>
<ul>
<li>[x] Mermaid (UML): https://mermaid-js.github.io/mermaid/#/</li>
<li>[ ] PlantUML (UML, C4, Mindmap) : https://plantuml.com/</li>
<li>[ ] Structurizr (C4) : https://structurizr.com/</li>
</ul>
</li>
</ul>
<h2 id="heading-docs-for-agile-delivery-in-practice">Docs for Agile Delivery in Practice</h2>
<ul>
<li><p>[x] Installing MkDocs &amp; Material Design theme for MkDocs</p>
<pre><code class="lang-sh">pip install mkdocs mkdocs-material
mkdocs new docs
<span class="hljs-comment"># ~/Library/Python/3.9/bin/mkdocs new docs</span>
</code></pre>
</li>
<li><p>✅ Branding</p>
<ul>
<li>[x] Adding your Logo to Header</li>
<li>[x] Adding your Favicon</li>
<li>[x] Adding your Social Media links to Footer</li>
<li>[x] Adding your Brand's colors</li>
<li>[ ] Adding fonts to match your brand</li>
</ul>
</li>
</ul>
<p>✅ Markdown Extensions</p>
<ul>
<li>[x] <a target="_blank" href="https://oceansoftio.github.io/docs/mkdocs-alternatives/">https://oceansoftio.github.io/docs/mkdocs-alternatives</a></li>
</ul>
<p>✅ AWS Amplify CI/CD Pipeline</p>
<ul>
<li>[x] <code>amplify.yml</code></li>
<li>[x] Amplify &gt;&gt; Previews</li>
<li>[x] Amplify &gt;&gt; Notifications</li>
</ul>
<h2 id="heading-live-demo">Live Demo:</h2>
<h3 id="heading-mkdocs">MkDocs</h3>
<blockquote>
<p>⚡ You can create an online workshop in one minute using this open-source Jamstack Static Site Generator (SSG) Template ⏱️</p>
</blockquote>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://www.youtube.com/watch?v=_MUOy2i_L3Y">https://www.youtube.com/watch?v=_MUOy2i_L3Y</a></div>
<pre><code class="lang-sh">git clone https://github.com/OceanSoftIO/docs
cd docs/mkdocs

mkdocs serve
</code></pre><blockquote>
<p>✅ <a target="_blank" href="https://solution.job4u.io">solution.job4u.io</a></p>
<p>🔗 <a target="_blank" href="https://github.com/OceanSoftIO/docs/tree/main/mkdocs">https://github.com/OceanSoftIO/docs/tree/main/mkdocs</a></p>
</blockquote>
<h3 id="heading-hugo">Hugo</h3>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://www.youtube.com/watch?v=CWjxIe9bfcM">https://www.youtube.com/watch?v=CWjxIe9bfcM</a></div>
<pre><code class="lang-sh">git clone -b hugo https://github.com/OceanSoftIO/docs
cd docs/hugo
git submodule init &amp;&amp; git submodule update --checkout --recursive

hugo server --port 8080
</code></pre><blockquote>
<p>🔗 http://localhost:8080/</p>
</blockquote>
<h3 id="heading-references">References</h3>
<ol>
<li><a target="_blank" href="https://www.amazon.com/Docs-Like-Code-Anne-Gentle/dp/1387081322/">Docs Like Code - Anne Gentle</a></li>
<li><a target="_blank" href="https://www.amazon.com/Modern-Technical-Writing-Introduction-Documentation-ebook/dp/B01A2QL9SS">Modern Technical Writing - Andrew Etter</a></li>
<li><a target="_blank" href="https://www.youtube.com/watch?v=Cxuo3udElcE">Amazon Web Services (AWS)</a> told the story of their team’s move to docs as code: what worked, what didn’t, what’s next.</li>
</ol>
]]></content:encoded></item><item><title><![CDATA[👨‍💻 Local Development Environment for Data Science and Machine Learning 🤖]]></title><description><![CDATA[🎯 Reproducible local Internal Development Platform (IDP) for developing and testing Data Science and Machine Learning projects 🚀
1. MacOS Settings

✅ Show your Mac's hidden files

Find Terminal under Launchpad > Other > Terminal, then run the follo...]]></description><link>https://blog.oceansoft.io/internal-development-platform-for-data-science-and-machine-learning</link><guid isPermaLink="true">https://blog.oceansoft.io/internal-development-platform-for-data-science-and-machine-learning</guid><category><![CDATA[macOS]]></category><category><![CDATA[Developer Tools]]></category><category><![CDATA[Data Science]]></category><category><![CDATA[Machine Learning]]></category><category><![CDATA[Developer]]></category><dc:creator><![CDATA[OceanSoft]]></dc:creator><pubDate>Sun, 09 Oct 2022 09:38:05 GMT</pubDate><content:encoded><![CDATA[<p>🎯 Reproducible local Internal Development Platform (<strong>IDP</strong>) for developing and testing <strong>Data Science</strong> and <strong>Machine Learning</strong> projects 🚀</p>
<h2 id="heading-1-macos-settings">1. MacOS Settings</h2>
<ul>
<li><p>✅ <mark>Show your Mac's hidden files</mark></p>
<ul>
<li><p>Find <strong>Terminal</strong> under <strong>Launchpad &gt; Other &gt; Terminal</strong>, then run the following commands:</p>
</li>
<li><p>Type <code>defaults write com.apple.Finder AppleShowAllFiles true</code> and press <strong>Enter</strong></p>
</li>
<li><p>Type <code>killall Finder</code> and press <strong>Enter</strong> again</p>
</li>
</ul>
</li>
<li><p>✅ Create APFS Volumes &amp; Setup Workplace Folder</p>
<pre><code class="lang-sh">  <span class="hljs-comment">## Change to your user directory</span>
  <span class="hljs-built_in">cd</span> ~
  <span class="hljs-comment">## Check if the workplace folder exists</span>
  ls -l workplace
  <span class="hljs-comment">## If you see "workplace -&gt; /Volumes/Workplace" continue to Part 4</span>
  <span class="hljs-comment">## If you see "ls: workplace: No such file or directory" create the symlink</span>
  ln -s /Volumes/Workplace ~/workplace
  <span class="hljs-comment">## Otherwise, if you see some other output, you already have a workplace folder but it is not linked to the encrypted volume. </span>
  <span class="hljs-comment">## You may want to consider moving that content to a new folder (eg workplace_old) and then create the symlink with the above command. </span>
  <span class="hljs-comment">## This will make following future commands and guides easier since they all assume you have the workplace folder.</span>
  <span class="hljs-comment">## Ask for help if you need it since getting this wrong will make the rest of the guide much harder.</span>
  <span class="hljs-comment">## Check if the workplace folder symlink is correct</span>
  ls -l workplace
  <span class="hljs-comment">## ✅ You should see "workplace -&gt; /Volumes/Workplace"</span>
</code></pre>
<blockquote>
<p><strong>⚠️ WARNING:</strong> The <code>Disk Utility</code> program should now have the volumes, make sure under the name it says: <code>System</code>: APFS Volume • <code>Workplace</code>: APFS (Encrypted)</p>
</blockquote>
</li>
<li><p>✅ Installing Homebrew Package Managers</p>
<pre><code class="lang-sh">  <span class="hljs-comment">## Install Homebrew from the Git repository</span>
  /bin/bash -c <span class="hljs-string">"<span class="hljs-subst">$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)</span>"</span>

  <span class="hljs-comment">## Follow along with the prompts to complete the installation. </span>
  <span class="hljs-comment">## You may want to restart the Terminal after installation to make sure the PATH variable is set correctly</span>
  <span class="hljs-comment">## If you see a warning during installation such as</span>
  <span class="hljs-comment">## - Run these two commands in your terminal to add Homebrew to your PATH:</span>
  (<span class="hljs-built_in">echo</span>; <span class="hljs-built_in">echo</span> <span class="hljs-string">'eval "$(/opt/homebrew/bin/brew shellenv)"'</span>) &gt;&gt; ~/.zshrc

  <span class="hljs-built_in">eval</span> <span class="hljs-string">"<span class="hljs-subst">$(/opt/homebrew/bin/brew shellenv)</span>"</span>

  <span class="hljs-comment">## Install Ruby to use Amazon Homebrew formulas without sudo</span>
  <span class="hljs-comment"># brew install ruby</span>

  <span class="hljs-comment">## REQUIRED: Turn off Homebrew analytics</span>
  brew analytics off

  brew update
  brew upgrade
  <span class="hljs-comment"># brew list</span>
  <span class="hljs-comment"># /usr/bin/python3</span>
</code></pre>
</li>
<li><p>✅ <mark>Invoke Apple’s Software Update Tool</mark></p>
<pre><code class="lang-sh">softwareupdate --install -a
</code></pre>
</li>
</ul>
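<p>🔎 The <code>workplace</code> symlink setup above can be rehearsed safely in a temporary directory before touching <code>/Volumes</code> (a sketch; the paths are stand-ins for the real volume):</p>

```shell
tmp=$(mktemp -d)
mkdir "$tmp/Workplace"                    # stands in for /Volumes/Workplace
ln -s "$tmp/Workplace" "$tmp/workplace"   # same ln -s shape as the guide

# `ls -l` on the link shows "workplace -> .../Workplace"; readlink prints the target.
readlink "$tmp/workplace"
[ -L "$tmp/workplace" ] && echo "symlink OK"   # -> symlink OK
```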
<ul>
<li><p>✅ Updating Git</p>
<pre><code class="lang-sh">  <span class="hljs-comment">## Install an updated version of Git</span>
  <span class="hljs-comment"># brew install git</span>
  <span class="hljs-comment"># sudo xcode-select -switch /Library/Developer/CommandLineTools</span>

  <span class="hljs-comment">## Check if your username and email are configured correctly in Git</span>
  git config --list
  <span class="hljs-comment">## If either your username or email is not set properly, then update it with the respective command</span>
  git config --global user.name <span class="hljs-string">"Thanh Nguyen"</span>
  git config --global user.email nnthanh101@gmail.com
</code></pre>
</li>
<li><p><strong>✍️ NOTE:</strong> This guide assumes you are using ZSH as your shell.</p>
<ul>
<li><p>✅ If running <code>echo $SHELL</code> in your Terminal returns <code>/bin/zsh</code>, then you shouldn’t run into any issues.</p>
</li>
<li><p>[ ] If you are using Bash (<code>/bin/bash</code> is returned instead), then change <code>~/.zshrc</code> to <code>~/.bash_profile</code> whenever you are exporting variables.</p>
</li>
</ul>
</li>
</ul>
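<p>🐚 The ZSH/Bash distinction above can be handled once with a small case statement (a sketch; <code>$SHELL</code> reports your login shell, and the fallback file is my choice):</p>

```shell
# Pick the startup file that matches the login shell before exporting variables.
case "$SHELL" in
  */zsh)  profile="$HOME/.zshrc" ;;
  */bash) profile="$HOME/.bash_profile" ;;
  *)      profile="$HOME/.profile" ;;
esac
echo "exports will go into: $profile"
```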
<h2 id="heading-2-docker-desktop-amp-vscode">2. Docker Desktop &amp; VSCode</h2>
<h3 id="heading-21-install-docker-desktophttpswwwdockercomproductsdocker-desktop">2.1. Install <a target="_blank" href="https://www.docker.com/products/docker-desktop/">Docker Desktop</a></h3>
<ul>
<li><p>✅ MacBook-Pro Resource Settings:</p>
<ul>
<li><p>MacBook - 8 vCPU, 16GB Memory, 250GB SSD Disk</p>
</li>
<li><p>Docker - 6 vCPU, 8GB Memory, Swap 1GB, 56GB Virtual-Disk</p>
</li>
</ul>
</li>
</ul>
<p>    <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1702689214763/fd4869b5-a319-4730-b049-92d250700648.png" alt /></p>
<p>Docker is a tool used to run containerized applications. If a project requires it, Docker Desktop can be installed by following the instructions at the link above.</p>
<p>During installation, consider adjusting the system resources reserved for Docker (for example, 2 CPUs and 2 GB of RAM). If you only run a few containers at a time, the default settings are adequate, and a smaller reservation leaves more resources for macOS itself while Docker runs in the background.</p>
<blockquote>
<p><code>docker system prune --all --force</code></p>
</blockquote>
<h3 id="heading-22-installing-visual-studio-codehttpscodevisualstudiocomdownload">2.2. Installing <a target="_blank" href="https://code.visualstudio.com/download">Visual Studio Code</a></h3>
<ul>
<li><p>Visual Studio Code Extensions</p>
<ul>
<li><p>✅ <a target="_blank" href="https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers"><strong>Dev Containers</strong></a></p>
</li>
<li><p>☑️ <a target="_blank" href="https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-docker">Docker [ms-azuretools.vscode-docker]</a></p>
</li>
<li><p>☑️ GitLens [eamodio.gitlens]</p>
</li>
<li><p>✅ Jupyter [ms-toolsai.jupyter]</p>
</li>
<li><p>✅ Kubernetes [ms-kubernetes-tools.vscode-kubernetes-tools]</p>
</li>
<li><p>[ ] Pylance [ms-python.vscode-pylance]</p>
</li>
<li><p>✅ <a target="_blank" href="https://marketplace.visualstudio.com/items?itemName=ms-python.python">Python</a></p>
</li>
<li><p>[ ] <a target="_blank" href="https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-ssh">Remote-ssh [ms-vscode-remote.remote-ssh]</a></p>
</li>
<li><p>[ ] <a target="_blank" href="https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers">Remote - Containers [ms-vscode-remote.remote-containers]</a></p>
</li>
<li><p>[ ] <a target="_blank" href="https://marketplace.visualstudio.com/items?itemName=dsznajder.es7-react-js-snippets">ES7+ React/Redux/React-Native snippets</a></p>
</li>
<li><p>[ ] <a target="_blank" href="https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint">ESLint</a></p>
</li>
<li><p>[ ] <a target="_blank" href="https://marketplace.visualstudio.com/items?itemName=GrapeCity.gc-excelviewer">Excel Viewer</a></p>
</li>
<li><p>[ ] <a target="_blank" href="https://marketplace.visualstudio.com/items?itemName=esbenp.prettier-vscode">Prettier - Code formatter</a></p>
</li>
<li><p>HashiCorp Terraform [hashicorp.terraform]</p>
</li>
</ul>
</li>
</ul>
<p><a target="_blank" href="https://docs.aws.amazon.com/lambda/latest/dg/lambda-runtimes.html">Lambda runtimes</a></p>
<ul>
<li>Installing <a target="_blank" href="https://www.sourcetreeapp.com/">SourceTree Git</a></li>
</ul>
<pre><code class="lang-plaintext">git clone https://github.com/nnthanh101/Machine-Learning

cd Machine-Learning
code .
</code></pre>
<h3 id="heading-23-installing-web-browsers">2.3. Installing Web Browsers:</h3>
<ul>
<li>✅ <a target="_blank" href="https://www.google.com/intl/en_uk/chrome/">Chrome</a></li>
<li>✅ <a target="_blank" href="https://brave.com/">Brave</a></li>
<li>☑️ <a target="_blank" href="https://www.mozilla.org/en-GB/firefox/new/">Firefox</a></li>
</ul>
<ul>
<li><p>Install Web Browser Extensions (chromium)</p>
<ul>
<li><p>✅ <a target="_blank" href="https://chrome.google.com/webstore/detail/json-formatter/bcjindcccaagfpapjjmafapmmgkkhgoa">JSON Formatter</a></p>
</li>
<li><p>✅ <a target="_blank" href="https://chrome.google.com/webstore/detail/react-developer-tools/fmkadmapgofadopljbjfkapdkoienihi">React Developer Tools</a></p>
</li>
<li><p>✅ <a target="_blank" href="https://chrome.google.com/webstore/detail/multi-elasticsearch-head/cpmmilfkofbeimbmgiclohpodggeheim">Multi Elasticsearch Head</a></p>
</li>
</ul>
</li>
</ul>
<hr />
<h2 id="heading-3-install-utilities">3. Install Utilities</h2>
<ul>
<li><p>Install Software</p>
<ul>
<li><p>☑️ <a target="_blank" href="https://www.figma.com/">Figma</a></p>
</li>
<li><p>☑️ <a target="_blank" href="https://www.mongodb.com/try/download/compass">MongoDB Compass</a></p>
</li>
</ul>
</li>
</ul>
<h2 id="heading-software-developers-tech-stack">Software Developer's Tech Stack</h2>
<ul>
<li><p><strong>Docker</strong></p>
</li>
<li><p><strong>Front-End</strong>:</p>
<ul>
<li><p>✅ TypeScript, HTML/CSS/JavaScript</p>
</li>
<li><p>✅ React, React Native, Next.js</p>
</li>
</ul>
</li>
<li><p><strong>Back-End</strong>:</p>
<ul>
<li><p>✅ NodeJS 18.x</p>
</li>
<li><p>✅ Python 3.10.12</p>
</li>
<li><p>Redis</p>
</li>
<li><p>[ ] SQL: SQLite, MySQL/MariaDB, Postgres</p>
</li>
<li><p>[ ] NoSQL: MongoDB, DynamoDB</p>
</li>
</ul>
</li>
<li><p><strong>Data Science</strong>:</p>
<ul>
<li><p>Python (pyenv)</p>
</li>
<li><p>Scikit-Learn</p>
</li>
<li><p>Tensorflow</p>
</li>
</ul>
</li>
</ul>
<h2 id="heading-install-python">Install Python</h2>
<blockquote>
<p><strong>⛔️ Do not mess with your system Python:</strong> Avoid using or installing packages to ‘system Python’, the Python program already installed on your system.</p>
<p>✅ Prefer to use <a target="_blank" href="https://github.com/pyenv/pyenv#homebrew-in-macos"><strong>pyenv in macOS</strong></a> to manage your Python versions and virtual environments.</p>
<p>Note: if python-build fails due to “zipimport.ZipImportError: can’t decompress data; zlib not available”, go here first.</p>
<p>Bash note: if using Bash then change <code>~/.zshrc</code> to <code>~/.bash_profile</code> here.</p>
</blockquote>
<h3 id="heading-install-pyenv">Install pyenv</h3>
<pre><code class="lang-sh"><span class="hljs-comment">## Install the prerequisites from Homebrew: https://github.com/pyenv/pyenv#homebrew-in-macos</span>
brew update
brew install xz pyenv pyenv-virtualenv

<span class="hljs-comment"># pip3 install --user pipenv</span>
<span class="hljs-comment"># pip3 install --user --upgrade pipenv</span>

<span class="hljs-comment">## Set up your shell environment for Pyenv</span>
<span class="hljs-comment">## Initialise pyenv when loading a new session</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">'eval "$(pyenv init -)"'</span> &gt;&gt; ~/.zshrc
<span class="hljs-comment"># if which pyenv-virtualenv-init &gt; /dev/null; then eval "$(pyenv virtualenv-init -)"; fi</span>
</code></pre>
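<p>For completeness, the pyenv README's recommended shell setup also exports <code>PYENV_ROOT</code> and puts pyenv itself on your <code>PATH</code>; a sketch of the full <code>~/.zshrc</code> stanza:</p>
<pre><code class="lang-sh">## Full pyenv stanza for ~/.zshrc, per the pyenv README
export PYENV_ROOT="$HOME/.pyenv"
export PATH="$PYENV_ROOT/bin:$PATH"
eval "$(pyenv init -)"
</code></pre>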
<h3 id="heading-create-python-3-venv">Create Python 3 Venv</h3>
<pre><code class="lang-sh"><span class="hljs-comment">## Create a workspace for your development work</span>
mkdir -p ~/workplace/&lt;WORKSPACE_NAME&gt;
<span class="hljs-comment">## Change into the workspace directory</span>
<span class="hljs-built_in">cd</span> ~/workplace/&lt;WORKSPACE_NAME&gt;

<span class="hljs-comment">## List the Python versions you have installed with pyenv</span>
pyenv versions

<span class="hljs-comment">## If you don't have the version installed that you want to use then list all versions available to install</span>
pyenv install --list

<span class="hljs-comment">## Install the version you want to use (Lambda runtimes) - this takes some time</span>
pyenv install 3.12.1
<span class="hljs-comment"># ls ~/.pyenv/versions/</span>

<span class="hljs-comment">## Select globally for your user account</span>
pyenv global 3.12.1
<span class="hljs-comment">## Set the local Python version within the workspace (current directory or subdirectories)</span>
<span class="hljs-comment"># pyenv local &lt;VERSION&gt;</span>

<span class="hljs-comment">## Make sure you are using the correct Python version: python --version</span>
python -V
</code></pre>
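<p>The commented-out <code>pyenv local</code> step above simply records the chosen version in a <code>.python-version</code> file that pyenv re-reads whenever you enter the directory; a minimal sketch of what it writes (version number illustrative):</p>
<pre><code class="lang-sh">## "pyenv local 3.12.1" drops a one-line .python-version file in the
## current directory; writing the file by hand has the same effect
echo "3.12.1" | tee .python-version
</code></pre>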
<h3 id="heading-handling-virtualenv-using-pyenv-virtualenv">Handling virtualenv using pyenv-virtualenv</h3>
<pre><code class="lang-sh"><span class="hljs-comment">## Installing pyenv-virtualenv for virtual environment management</span>
<span class="hljs-comment"># brew install pyenv-virtualenv</span>

<span class="hljs-comment">## Run this command to add a new line to your ~/.zshrc</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">'eval "$(pyenv virtualenv-init -)"'</span> &gt;&gt; ~/.zshrc

<span class="hljs-comment">## Create new virtualenv (e.g. pyenv virtualenv &lt;python-version&gt; &lt;env-name&gt;)</span>
pyenv virtualenv 3.12.1 analytics

<span class="hljs-comment">## Activate the virtualenv</span>
pyenv activate analytics

<span class="hljs-comment">## List all available virtual environments</span>
pyenv virtualenvs
</code></pre>
<pre><code class="lang-bash"><span class="hljs-comment">## Create the Python virtual environment and store it in the "env" directory</span>
python -m venv env
<span class="hljs-comment">## Activate the virtual environment - you must do this every time you start a new shell.</span>
<span class="hljs-comment">## You can tell you are in the virtual environment if you see</span>
<span class="hljs-comment">## (env) at the beginning of your Terminal line</span>
<span class="hljs-built_in">source</span> env/bin/activate
<span class="hljs-comment">## You are now ready to create Python code within a virtual environment.</span>
<span class="hljs-comment">## Running pip install will install packages to your "env" directory and</span>
<span class="hljs-comment">## will not make changes to your system packages.</span>

<span class="hljs-comment">## After you are finished working in the virtual environment you can deactivate it.</span>
deactivate
</code></pre>
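<p>As a quick sanity check that a virtual environment really is isolated, the interpreter it exposes should report a <code>sys.prefix</code> inside the environment directory rather than the system prefix; a throwaway sketch (path and name are illustrative):</p>
<pre><code class="lang-sh">## Create a disposable venv, ask its interpreter where it lives, then clean up
python3 -m venv --without-pip /tmp/demo-env
prefix=$(/tmp/demo-env/bin/python -c 'import sys; print(sys.prefix)')
echo "$prefix"
rm -rf /tmp/demo-env
</code></pre>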
<h3 id="heading-jupyterlab">JupyterLab</h3>
<pre><code class="lang-bash"><span class="hljs-comment">## Activate virtual environment if not already activated</span>
pyenv activate analytics

<span class="hljs-comment">## Install JupyterLab into your virtual environment</span>
pip install jupyterlab

<span class="hljs-comment">## Open jupyter lab</span>
jupyter lab
</code></pre>
<pre><code class="lang-sh"><span class="hljs-comment">## (Optional) If you are not using pyenv, alias python/pip to the system Python 3</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"alias python=/usr/bin/python3"</span> &gt;&gt; ~/.zshrc
<span class="hljs-built_in">echo</span> <span class="hljs-string">"alias pip=/usr/bin/pip3"</span> &gt;&gt; ~/.zshrc

<span class="hljs-comment">## Reload your environment</span>
<span class="hljs-built_in">source</span> ~/.zshrc
<span class="hljs-comment">## If using bash</span>
<span class="hljs-comment"># source ~/.bash_profile</span>
</code></pre>
<h2 id="heading-install-nodejs">Install NodeJS</h2>
<pre><code class="lang-sh">curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash

<span class="hljs-built_in">echo</span> <span class="hljs-string">'export NVM_DIR="$HOME/.nvm"'</span> &gt;&gt; ~/.zshrc
<span class="hljs-built_in">echo</span> <span class="hljs-string">'[ -s "$NVM_DIR/nvm.sh" ] &amp;&amp; \. "$NVM_DIR/nvm.sh"'</span> &gt;&gt; ~/.zshrc

<span class="hljs-comment"># nvm ls-remote --lts</span>
nvm install --lts=Iron
nvm use --lts=Iron
<span class="hljs-comment"># nvm alias default 20.10.0</span>

node -v
npm -v
</code></pre>
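<p>nvm can also pin a Node version per project: run with no argument, <code>nvm use</code> reads an <code>.nvmrc</code> file from the project root; a sketch (the alias matches the LTS line installed above):</p>
<pre><code class="lang-sh">## Record the project's Node version; a plain "nvm use" will pick it up
echo "lts/iron" | tee .nvmrc
# nvm use
</code></pre>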
<pre><code class="lang-plaintext">npm install -g yarn aws-cdk

yarn -v
cdk --version
</code></pre>
<h2 id="heading-setup-java">[ ] Setup Java</h2>
<blockquote>
<p>Bash note: if using Bash then change <code>~/.zshrc</code> to <code>~/.bash_profile</code> here.</p>
</blockquote>
<pre><code class="lang-sh"><span class="hljs-comment">## Add JAVA_HOME to your environment permanently - version 11 is currently recommended</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"export JAVA_HOME=/Library/Java/JavaVirtualMachines/amazon-corretto-11.jdk/Contents/Home"</span> &gt;&gt; ~/.zshrc
<span class="hljs-comment">## Verify that this is in ~/.zshrc or ~/.bash_profile</span>
cat ~/.zshrc
<span class="hljs-comment">## OR</span>
<span class="hljs-comment"># cat ~/.bash_profile</span>
</code></pre>
<h2 id="heading-install-ruby-languages">[ ] Install Ruby</h2>
<pre><code class="lang-sh"><span class="hljs-comment">## Install the prerequisites from Homebrew</span>
brew install rbenv libyaml libffi

<span class="hljs-comment">## Set up the recommended Ruby version for Brazil</span>
ruby-build 2.5.8 ~/.runtimes/ruby-2.5.8
</code></pre>
<h2 id="heading-install-aws-cli">Install AWS CLI</h2>
<ul>
<li><p>Note: if AWS CLI is version 1 then go <a target="_blank" href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html">here</a> to install AWS CLI version 2.</p>
<pre><code class="lang-sh">  <span class="hljs-comment">## Check if AWS CLI version 2 is already installed</span>
  aws --version
  <span class="hljs-comment">## If you see "aws-cli/2.0.0" or higher then continue</span>
  <span class="hljs-comment">## If you see "zsh: command not found: aws" then download and install the AWS CLI version 2 package: https://awscli.amazonaws.com/AWSCLIV2.pkg</span>
</code></pre>
</li>
<li><p>Set Up AWS CLI Config File: AWS CLI v2 utilizes a config file to store frequently used configurations and credentials.</p>
<pre><code class="lang-sh">  <span class="hljs-comment">## Make sure ~/.aws exists</span>
  ls ~/.aws
  <span class="hljs-comment">## If you see an error saying no such file or directory then create it</span>
  mkdir ~/.aws
  <span class="hljs-comment">## Edit the config file</span>
  nano ~/.aws/config
</code></pre>
</li>
<li><p>Add the following to your AWS CLI config file.</p>
<pre><code class="lang-plaintext">  [default]
  output=json
  region=ap-southeast-2

  ## (Optional) Add a named profile - boto3 has issues assuming named profiles
  [profile oceansoft]
  output=json
  region=ap-southeast-2
</code></pre>
</li>
<li><p>To exit Nano, press Control+X, “Y” to accept changes, and then Return to save the file at <code>/Users/&lt;ALIAS&gt;/.aws/config</code>. After, enter the following in the Terminal.</p>
<pre><code class="lang-sh">  <span class="hljs-comment">## Check AWS CLI is working well</span>
  aws s3 ls
  <span class="hljs-comment">## (Optional) Check that your named profile works</span>
  aws s3 ls --profile oceansoft
  <span class="hljs-comment">## If successful, you should see a list of your S3 buckets and AWS CLI is successfully using temporary credentials</span>
</code></pre>
</li>
</ul>
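<p>Rather than repeating <code>--profile</code> on every command, a named profile can be selected for the whole shell session through the <code>AWS_PROFILE</code> environment variable; a sketch (profile name from the config above):</p>
<pre><code class="lang-sh">## Make every aws command in this shell use the named profile
export AWS_PROFILE=oceansoft
## e.g. confirm which identity the CLI now resolves:
# aws sts get-caller-identity
</code></pre>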
<h2 id="heading-install-rstudio">Install RStudio</h2>
<ul>
<li><p><strong>✅</strong> Install <strong>R</strong>: <a target="_blank" href="https://cran.rstudio.com/">R for macOS</a></p>
</li>
<li><p><strong>✅</strong> Install <strong>RStudio</strong>: <a target="_blank" href="https://posit.co/download/rstudio-desktop/">RStudio Desktop</a></p>
</li>
</ul>
<h2 id="heading-finished-amp-post-setup">Finished &amp; Post Setup</h2>
<p>If you followed along with this guide, you should now have a working macOS environment set up for development work. The Post Setup steps following this guide are needed every time you set up a new workspace for development work.</p>
<blockquote><p>✅ You may need to restart your computer after installing the above packages.</p></blockquote>
]]></content:encoded></item></channel></rss>