<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[byteQualia]]></title><description><![CDATA[contemporary. computing. concepts.]]></description><link>https://blog.bytequalia.com/</link><image><url>https://blog.bytequalia.com/favicon.png</url><title>byteQualia</title><link>https://blog.bytequalia.com/</link></image><generator>Ghost 4.35</generator><lastBuildDate>Fri, 04 Apr 2025 04:24:04 GMT</lastBuildDate><atom:link href="https://blog.bytequalia.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Implementing application load balancing for ECS Anywhere workloads]]></title><description><![CDATA[In this post we demonstrate how to deploy and load balance a web application across multiple Amazon ECS Anywhere external instances, using the Traefik Proxy (Traefik) network load balancer.]]></description><link>https://blog.bytequalia.com/amazon-ecs-anywhere-nlb-traefik/</link><guid isPermaLink="false">64a912ff3398fc00012d0c13</guid><category><![CDATA[Amazon ECS]]></category><category><![CDATA[Cloud Native]]></category><category><![CDATA[Traefik]]></category><dc:creator><![CDATA[Cameron Senese]]></dc:creator><pubDate>Wed, 22 Nov 2023 01:39:32 GMT</pubDate><media:content url="https://blog.bytequalia.com/content/images/2023/11/igor-omilaev-6-Y_Hxoh7VU-unsplash.jpg" medium="image"/><content:encoded><![CDATA[<blockquote><em>This post demonstrates how to deploy and load balance a web application across multiple Amazon ECS Anywhere external instances, using the <a href="https://traefik.io/traefik/">Traefik Proxy</a> (Traefik) network load balancer.</em></blockquote><h2 id="introduction">Introduction</h2><img src="https://blog.bytequalia.com/content/images/2023/11/igor-omilaev-6-Y_Hxoh7VU-unsplash.jpg" alt="Implementing application load 
balancing for ECS Anywhere workloads"><p>With <a href="https://aws.amazon.com/ecs/anywhere/">Amazon ECS Anywhere</a>, you can run and manage containers on any customer-managed infrastructure using the same cloud-based, fully managed, and highly scalable container orchestration service you use in AWS today. Amazon ECS Anywhere provides support for registering an <em>external instance</em>, such as an on-premises server or virtual machine (VM), to your Amazon ECS cluster. External instances are optimized for running applications that generate outbound traffic or process data, and can also be used to host applications that service inbound requests such as frontend web applications, microservices, or Application Programming Interfaces (APIs).</p><figure class="kg-card kg-image-card"><img src="https://blog.bytequalia.com/content/images/2023/11/image.png" class="kg-image" alt="Implementing application load balancing for ECS Anywhere workloads" loading="lazy" width="1920" height="1080" srcset="https://blog.bytequalia.com/content/images/size/w600/2023/11/image.png 600w, https://blog.bytequalia.com/content/images/size/w1000/2023/11/image.png 1000w, https://blog.bytequalia.com/content/images/size/w1600/2023/11/image.png 1600w, https://blog.bytequalia.com/content/images/2023/11/image.png 1920w" sizes="(min-width: 720px) 720px"></figure><p>When using Amazon ECS Anywhere to manage applications that service inbound requests, it&#x2019;s possible to configure an Amazon Elastic Container Service (<a href="https://docs.aws.amazon.com/ecs/?icmpid=docs_homepage_containers">Amazon ECS</a>) service to run and maintain a specified number of instances of a task definition simultaneously across multiple Amazon ECS external instances to provide increased capacity, availability, and reliability of your applications. 
In this scenario, deploying a load balancer to an external instance simplifies the even distribution of inbound traffic across the instances that host an application.</p><p>In this post, we demonstrate how to deploy and load balance a web application across multiple Amazon ECS Anywhere external instances, using the <a href="https://traefik.io/traefik/">Traefik Proxy</a> (Traefik) network load balancer.</p><p>Traefik is an open-source cloud-native load balancer and reverse proxy application developed by the software company Traefik Labs. Traefik supports automatic service discovery and configuration for a variety of orchestrators, including Docker, Amazon ECS, Kubernetes, and Marathon. Traefik also provides support for encrypted traffic termination, automatic security certificate issuance and renewal, and additional load balancing capabilities such as circuit breakers and rate limiting.</p><h2 id="solution-overview">Solution overview</h2><p>In reference to the <strong>Solution overview</strong> diagram, the solution demonstrated in this post has the following key characteristics:</p><ul><li>The load balanced web application is deployed by Amazon ECS Anywhere to multiple managed external instances.</li><li>The web application is deployed as an Amazon ECS service <code>Whoami</code>, and the associated task definition simultaneously runs four instances of the containerized whoami web application distributed across two separate external instances.</li><li>The Traefik load balancer is deployed as an Amazon ECS service <code>LoadBalancer</code>, and is configured to load balance incoming HTTP requests evenly across all instances of the whoami web application using rule based traffic routing.</li><li>The solution implements the default Traefik load balancing algorithm, which is round robin with an equal weighting applied to each web application instance.</li></ul><figure class="kg-card kg-image-card kg-card-hascaption"><img 
src="https://d2908q01vomqb2.cloudfront.net/fe2ef495a1152561572949784c16bf23abb28057/2023/07/18/Solution-Overview.png" class="kg-image" alt="Implementing application load balancing for ECS Anywhere workloads" loading="lazy"><figcaption>Diagram-1 &#x2013; Solution overview</figcaption></figure><h3 id="traefik-configuration">Traefik configuration</h3><p>In reference to the <strong>Solution routing architecture</strong> diagram, the Traefik request routing and load balancing architecture comprises four major components: EntryPoint, Router, Service, and Provider.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://d2908q01vomqb2.cloudfront.net/fe2ef495a1152561572949784c16bf23abb28057/2023/07/18/Solution-Routing.png" class="kg-image" alt="Implementing application load balancing for ECS Anywhere workloads" loading="lazy"><figcaption>Diagram-2 &#x2013; Solution routing architecture</figcaption></figure><p>EntryPoints are the network entry points into the Traefik load balancer, which define the port and protocol on which to listen for incoming network packets. In the reference solution, the <code>LoadBalancer</code> Traefik instance is configured with a single HTTP EntryPoint listening on TCP port 80.</p><p>Services are responsible for configuring how to reach the actual application endpoints that handle the incoming requests. In the reference solution, we create and configure a Traefik Service named <code>lb-svc-whoami</code> that&#x2019;s configured to use the inbuilt Amazon ECS provider. 
Via the inbuilt Amazon ECS provider, Traefik automatically enumerates active application endpoints via the Amazon ECS control plane and updates its routing rules in real-time &#x2014; dynamically load balancing incoming requests across the pool of active application instances, which may expand or contract in response to factors such as fluctuating load, application failure, or updates.</p><p>Routers analyze incoming requests, using rules to decide which Service to forward any given request to. Forwarding rules provide the ability to match traffic based on request characteristics such as host, path, headers, and method. Two Routers have been implemented in the example solution, each configured to forward traffic to the Traefik Service <code>lb-svc-whoami</code>:</p><ul><li><code>whoami-host</code> is configured with a host based rule, which forwards incoming requests to the <code>lb-svc-whoami</code> service when the request domain (i.e., host header value) matches &#x201C;whoami.domain.com&#x201D;.</li><li><code>whoami-path</code> is configured with a path based rule, which forwards incoming requests to the <code>lb-svc-whoami</code> service when the HTTP request path matches &#x201C;/whoami&#x201D;.</li></ul><h3 id="solution-operation">Solution operation</h3><p>In reference to the <strong>Solution operation </strong>diagram, the solution demonstrated in this blog post has the following key operational characteristics:</p><ol><li>The Amazon ECS services <code>LoadBalancer</code> and <code>Whoami</code> are provisioned to the external instance by the Admin, who submits configuration and deployment requests to the Amazon ECS service in the AWS Region.</li><li>In communication with the Amazon ECS service, the Amazon ECS agent on each external instance launches the <code>LoadBalancer</code> and <code>Whoami</code> workloads via the local Docker API.</li><li>In communication with the Amazon ECS service, the <code>LoadBalancer</code> Traefik instance enumerates containers 
backing the <code>Whoami</code> service, and updates its routing table accordingly ([2b], [2c]).</li><li>User-1 requests to &#x201C;http://whoami.domain.com&#x201D; match the Traefik host based route <code>whoami-host</code> and are load balanced across each of the whoami web application instances [A].</li><li>User-2 requests to &#x201C;http://domain.com/whoami&#x201D; match the Traefik path based route <code>whoami-path</code> and are load balanced across each of the whoami web application instances [A].</li></ol><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://d2908q01vomqb2.cloudfront.net/fe2ef495a1152561572949784c16bf23abb28057/2023/07/18/Solution-Operation.png" class="kg-image" alt="Implementing application load balancing for ECS Anywhere workloads" loading="lazy"><figcaption>Diagram-3 &#x2013; Solution operation</figcaption></figure><h2 id="walkthrough">Walkthrough</h2><p>In this section, I&#x2019;ll guide you through the process of deploying the load balanced whoami web application on Amazon ECS Anywhere. We configure the solution in accordance with the deployment scenario detailed in the <strong>Solution overview</strong> section. 
The solution walkthrough proceeds in the following order of operation:</p><ol><li>Deploy the <code>LoadBalancer</code> service to external instance #1.</li><li>Deploy the <code>Whoami</code> service to external instances #2 and #3.</li><li>Demonstrate host based load balancing to the <code>Whoami</code> service.</li><li>Demonstrate path based load balancing to the <code>Whoami</code> service.</li><li>Clean up.</li></ol><h3 id="prerequisites">Prerequisites</h3><p>To complete this walkthrough, you will need the following prerequisites:</p><ul><li>An <a href="https://signin.aws.amazon.com/signin?redirect_uri=https%3A%2F%2Fportal.aws.amazon.com%2Fbilling%2Fsignup%2Fresume&amp;client_id=signup">AWS account</a> with necessary permissions to create the resources.</li><li>The AWS Command Line Interface (<a href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html">AWS CLI</a>) installed and configured.</li><li>An Amazon <a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-cluster-console-v2.html">ECS Cluster</a> with three registered Amazon ECS Anywhere external instances.</li><li>Each external instance you register with an Amazon ECS cluster requires the AWS Systems Manager (SSM) Agent, the Amazon ECS container agent, and Docker to be installed. To register the external instance to an Amazon ECS cluster, it must first be registered as an AWS Systems Manager managed instance. You can generate the comprehensive installation script in a few clicks on the Amazon ECS console. Follow the instructions as described <a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-anywhere-registration.html">here</a> in the Amazon ECS product documentation.</li><li>An AWS Identity and Access Management (<a href="https://docs.aws.amazon.com/iam/?icmpid=docs_homepage_security">AWS IAM</a>) role provisioned with appropriate policy to permit the Traefik proxy to read the required Amazon ECS attributes. 
You can follow <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user.html">this link</a> for instructions on creating an AWS IAM role and associated policy. The Amazon ECS Task IAM role requires the following policy configuration in order to read required Amazon ECS information.</li></ul><pre><code class="language-apacheconf">{
    &quot;Version&quot;: &quot;2012-10-17&quot;,
    &quot;Statement&quot;: [
        {
            &quot;Sid&quot;: &quot;TraefikECSReadAccess&quot;,
            &quot;Effect&quot;: &quot;Allow&quot;,
            &quot;Action&quot;: [
                &quot;ecs:ListClusters&quot;,
                &quot;ecs:DescribeClusters&quot;,
                &quot;ecs:ListTasks&quot;,
                &quot;ecs:DescribeTasks&quot;,
                &quot;ecs:DescribeContainerInstances&quot;,
                &quot;ecs:DescribeTaskDefinition&quot;,
                &quot;ec2:DescribeInstances&quot;,
                &quot;ssm:DescribeInstanceInformation&quot;
            ],
            &quot;Resource&quot;: [
                &quot;*&quot;
            ]
        }
    ]
}
</code></pre><h3 id="1-assign-roles-to-external-instances-using-amazon-ecs-custom-attributes">1. Assign roles to external instances using Amazon ECS custom attributes</h3><p>You can add custom metadata to your container instances, known as <a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-placement-constraints.html"><em>attributes</em></a>. Each attribute has a name and an optional string value. To assist in targeting the deployment of our Amazon ECS services to specific external instances, we assign our external instances a logical role using custom attributes. The custom attributes are used to configure task placement constraints.</p><p>One of the external instances is assigned the role of <code>loadbalancer</code>. By following this <a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-placement-constraints.html">work instruction,</a> add the following custom attribute to one of your external instances:</p><ul><li>Name = <code>role</code>, Value = <code>loadbalancer</code></li></ul><p>The remaining two external instances are assigned the role of <code>webserver</code>. Add the following custom attribute to each of the remaining external instances:</p><ul><li>Name = <code>role</code>, Value = <code>webserver</code></li></ul><h3 id="2-deploy-the-loadbalancer-service-to-the-external-instance">2. Deploy the <code>LoadBalancer</code> service to the external instance</h3><p>Next we deploy the Traefik load balancer. We first create an Amazon ECS task definition, which describes the container configuration. Then we deploy the task definition to an external instance as an Amazon ECS service.</p><p>The following is an example JSON task definition, which contains the required configuration to implement the Traefik load balancer. 
Some points to note:</p><ul><li>The Amazon ECS launch type compatibility is set to <code>EXTERNAL</code> to ensure correct operation on an external instance.</li><li>The task definition includes the placement constraint matching the <code>loadbalancer</code> custom attribute value.</li><li>The Traefik web user interface has been enabled, and will be accessible via Transmission Control Protocol (TCP) port 8080 on the external instance host IP address.</li></ul><blockquote><em>Note: Replace the string <code>&lt;TASK_ROLE_ARN&gt;</code> with the Amazon Resource Name (ARN) of the AWS IAM role configured with the <code>TraefikECSReadAccess</code> policy as described in the prerequisites section.</em></blockquote><pre><code class="language-apacheconf">{
    &quot;family&quot;: &quot;LoadBalancer&quot;,
    &quot;cpu&quot;: &quot;256&quot;,
    &quot;memory&quot;: &quot;128&quot;,
    &quot;containerDefinitions&quot;: [
      {
        &quot;name&quot;: &quot;traefik&quot;,
        &quot;image&quot;: &quot;traefik:latest&quot;,
        &quot;entryPoint&quot;: [],
        &quot;portMappings&quot;: [
          {
            &quot;hostPort&quot;: 80,
            &quot;protocol&quot;: &quot;tcp&quot;,
            &quot;containerPort&quot;: 80
          },
          {
            &quot;hostPort&quot;: 8080,
            &quot;protocol&quot;: &quot;tcp&quot;,
            &quot;containerPort&quot;: 8080
          }
        ],
        &quot;command&quot;: [
          &quot;--api.dashboard=true&quot;,
          &quot;--api.insecure=true&quot;,
          &quot;--accesslog=true&quot;,
          &quot;--providers.ecs.ecsAnywhere=true&quot;,
          &quot;--providers.ecs.region=ap-southeast-2&quot;,
          &quot;--providers.ecs.autoDiscoverClusters=true&quot;,
          &quot;--providers.ecs.exposedByDefault=true&quot;
        ]
      }
    ],
    &quot;placementConstraints&quot;: [
      {
        &quot;type&quot;: &quot;memberOf&quot;,
        &quot;expression&quot;: &quot;attribute:role == loadbalancer&quot;
      }
    ],
    &quot;taskRoleArn&quot;: &quot;&lt;TASK_ROLE_ARN&gt;&quot;,
    &quot;requiresCompatibilities&quot;: [
      &quot;EXTERNAL&quot;
    ]
}
</code></pre><p>Copy the example task definition JSON content to a file named &#x201C;task-definition-LoadBalancer.json&#x201D;, and register the task definition with your cluster using the following AWS CLI command:</p><pre><code class="language-apacheconf">aws ecs register-task-definition --cli-input-json file://task-definition-LoadBalancer.json</code></pre><p>Next, create the <code>LoadBalancer</code> service using the <code>LoadBalancer</code> task definition by running the following AWS CLI command.</p><blockquote><em>Note: Replace the string <code>&lt;CLUSTER_NAME&gt;</code> with the target Amazon ECS cluster name. This should be the cluster with which the external instance is registered.</em></blockquote><pre><code class="language-apacheconf">aws ecs create-service \
    --cluster &lt;CLUSTER_NAME&gt; \
    --service-name LoadBalancer \
    --task-definition LoadBalancer:1 \
    --desired-count 1 \
    --launch-type EXTERNAL
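
# Optionally, wait for the service to reach a steady state before continuing
# (uses the same &lt;CLUSTER_NAME&gt; placeholder as above):
aws ecs wait services-stable \
    --cluster &lt;CLUSTER_NAME&gt; \
    --services LoadBalancer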
</code></pre><p>You can use the following AWS CLI command to validate that the service has deployed correctly.</p><pre><code class="language-apacheconf">aws ecs describe-services --cluster &lt;CLUSTER_NAME&gt; --services LoadBalancer</code></pre><p>The Traefik load balancer is now running on the external instance. You can access the Traefik web user interface by browsing to the following URL.</p><blockquote><em>Note: Replace the string <code>&lt;HOST_IP&gt;</code> with the target Amazon ECS external instance host IP address, or DNS hostname.</em></blockquote><pre><code class="language-apacheconf">http://&lt;HOST_IP&gt;:8080/dashboard/</code></pre><h3 id="3-deploy-the-whoami-service-to-the-webserver-external-instances">3. Deploy the <code>Whoami</code> service to the <code>webserver</code> external instances</h3><p>Next we deploy the example web application. First we create an Amazon ECS task definition, which describes the web application container configuration. Then we&#x2019;ll deploy the task definition to our external instances as an Amazon ECS service.</p><p>The following is an example JSON task definition, containing the required configuration to implement the whoami web application. Some points to note:</p><ul><li>The Amazon ECS launch type compatibility is set to <code>EXTERNAL</code> to ensure correct operation on an external instance.</li><li>The task definition includes the placement constraint matching the <code>webserver</code> custom attribute value.</li><li>Docker label <code>traefik.http.routers</code> is used to configure host and path based routing rules.</li><li>As the whoami container exposes the single TCP port 80, Docker label <code>traefik.http.services.whoami</code> is used to configure this port for private communication with the Traefik load balancer.</li></ul><pre><code class="language-apacheconf">{
  &quot;family&quot;: &quot;Whoami&quot;,
  &quot;cpu&quot;: &quot;256&quot;,
  &quot;memory&quot;: &quot;128&quot;,
  &quot;containerDefinitions&quot;: [
    {
      &quot;name&quot;: &quot;whoami&quot;,
      &quot;image&quot;: &quot;traefik/whoami:latest&quot;,
      &quot;entryPoint&quot;: [],
      &quot;portMappings&quot;: [
        {
          &quot;hostPort&quot;: 0,
          &quot;protocol&quot;: &quot;tcp&quot;,
          &quot;containerPort&quot;: 80
        }
      ],
      &quot;command&quot;: [],
      &quot;dockerLabels&quot;: {
        &quot;traefik.http.services.whoami.loadbalancer.server.port&quot;: &quot;80&quot;,
        &quot;traefik.http.routers.whoami-host.rule&quot;: &quot;Host(`whoami.domain.com`)&quot;,
        &quot;traefik.http.routers.whoami-path.rule&quot;: &quot;Path(`/whoami`)&quot;
      }
    }
  ],
  &quot;placementConstraints&quot;: [
    {
      &quot;type&quot;: &quot;memberOf&quot;,
      &quot;expression&quot;: &quot;attribute:role == webserver&quot;
    }
  ],
  &quot;volumes&quot;: [],
  &quot;requiresCompatibilities&quot;: [
    &quot;EXTERNAL&quot;
  ]
}
</code></pre><p>Copy the example task definition JSON content to a file named &#x201C;task-definition-Whoami.json&#x201D;, and register the task definition with your cluster using the following AWS CLI command:</p><pre><code class="language-apacheconf">aws ecs register-task-definition --cli-input-json file://task-definition-Whoami.json</code></pre><p>Then create the <code>Whoami</code> service using the <code>Whoami</code> task definition by running the following AWS CLI command.</p><blockquote><em>Note: The directive <code>--desired-count 4</code> instructs the Amazon ECS service scheduler to schedule and maintain 4 running instances of the whoami task spread evenly across the two <code>webserver</code> external instances.</em></blockquote><blockquote><em>Note: Replace the string <code>&lt;CLUSTER_NAME&gt;</code> with the desired Amazon ECS cluster name. This should be the cluster with which the external instances are registered.</em></blockquote><pre><code class="language-apacheconf">aws ecs create-service \
    --cluster &lt;CLUSTER_NAME&gt; \
    --service-name Whoami \
    --task-definition Whoami:1 \
    --desired-count 4 \
    --launch-type EXTERNAL
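
# Optionally, list the running tasks to confirm that four instances of the
# whoami task have been scheduled (same &lt;CLUSTER_NAME&gt; placeholder as above):
aws ecs list-tasks \
    --cluster &lt;CLUSTER_NAME&gt; \
    --service-name Whoami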
</code></pre><p>You can use the following AWS CLI command to validate that the service has deployed correctly.</p><blockquote><em>Note: Replace the string <code>&lt;CLUSTER_NAME&gt;</code> with the target Amazon ECS cluster name.</em></blockquote><pre><code class="language-apacheconf">aws ecs describe-services --cluster &lt;CLUSTER_NAME&gt; --services Whoami</code></pre><p>The whoami web application is now running on the external instances. You can use the Traefik web user interface to inspect the solution configuration. To do this, first browse to the following address in order to inspect the <code>whoami</code> service configuration.</p><blockquote><em>Note: Replace the string <code>&lt;HOST_IP&gt;</code> with the target Amazon ECS external instance host IP address, or DNS hostname.</em></blockquote><pre><code class="language-apacheconf">http://&lt;HOST_IP&gt;:8080/dashboard/#/http/services/whoami@ecs</code></pre><p>Per the following illustration, the Traefik web user interface provides an overview of the major components comprising the <code>whoami</code> service.</p><ul><li>In the [1] <em>Service Details</em> section we see the service type as <em>loadbalancer</em>, using the <em>Amazon ECS provider.</em></li><li>In the [2] <em>Servers</em> section are the four whoami web application instances distributed equally across the two <code>webserver</code> external instances, each listening on TCP port 80.</li><li>In the [3] <em>Routers</em> section the two Traefik routers <code>whoami-host</code> and <code>whoami-path</code> are listed with their respective host and path based routing rules, and association with the Service <code>whoami</code>.</li></ul><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://d2908q01vomqb2.cloudfront.net/fe2ef495a1152561572949784c16bf23abb28057/2023/07/18/Traefik-Web-Interface.png" class="kg-image" alt="Implementing application load balancing for ECS Anywhere workloads" 
loading="lazy"><figcaption>Diagram-4 &#x2013; Traefik web interface</figcaption></figure><p>With the solution now in place, let&#x2019;s validate the traffic routing behavior by sending HTTP requests to the whoami web application using both the host and path based routing rules.</p><h3 id="4-demonstrate-host-based-load-balancing-to-the-whoami-service">4. Demonstrate host based load balancing to the <code>Whoami</code> service</h3><p>Using a terminal emulator, let&#x2019;s send four HTTP requests to the host &#x201C;whoami.domain.com&#x201D;.</p><blockquote><em>Note: For name resolution to function in the test scenario, update the local <a href="https://en.wikipedia.org/wiki/Hosts_%28file%29">hosts</a> file on the test client machine, or create a DNS A record for host &#x201C;whoami.domain.com&#x201D;, associating it with the LAN IP address of the <code>loadbalancer</code> external instance. If required, this can also be exposed to the public internet via your preferred firewall/gateway solution and a publicly resolvable DNS zone.</em></blockquote><blockquote><em>Note: HTTP response data in the below examples has been truncated for brevity.</em></blockquote><pre><code class="language-apacheconf">$ curl whoami.domain.com
Hostname: c87e8f82af9f
IP: 127.0.0.1
IP: 172.17.0.2
RemoteAddr: 192.168.1.115:42752
GET / HTTP/1.1
Host: whoami.domain.com

$ curl whoami.domain.com
Hostname: 618e18ed985a
IP: 127.0.0.1
IP: 172.17.0.3
RemoteAddr: 192.168.1.115:34496
GET / HTTP/1.1
Host: whoami.domain.com

$ curl whoami.domain.com
Hostname: 0fb436d99af7
IP: 127.0.0.1
IP: 172.17.0.3
RemoteAddr: 192.168.1.115:55168
GET / HTTP/1.1
Host: whoami.domain.com

$ curl whoami.domain.com
Hostname: ab91f6ec8bcc
IP: 127.0.0.1
IP: 172.17.0.2
RemoteAddr: 192.168.1.115:44678
GET / HTTP/1.1
Host: whoami.domain.com
</code></pre><p>The <code>Hostname</code> field in the curl response represents the container ID of the load balanced container responding to the HTTP request. We can see in the example output that the Traefik load balancer has forwarded each of the four HTTP requests evenly across each of the four whoami web application containers c87e8f82af9f, 618e18ed985a, 0fb436d99af7, and ab91f6ec8bcc by using the default round robin algorithm.</p><h3 id="5-demonstrate-path-based-load-balancing-to-the-whoami-service">5. Demonstrate path based load balancing to the <code>Whoami</code> service</h3><p>Next, using a terminal emulator send four HTTP requests to the resource <code>&lt;HOST_IP&gt;/whoami</code>.</p><blockquote><em>Note: Replace the string <code>&lt;HOST_IP&gt;</code> with the target Amazon ECS external instance host IP address.</em></blockquote><pre><code class="language-apacheconf">$ curl &lt;HOST_IP&gt;/whoami
Hostname: c87e8f82af9f
IP: 127.0.0.1
IP: 172.17.0.2
RemoteAddr: 192.168.1.115:43674
GET /whoami HTTP/1.1
Host: 192.168.1.115

$ curl &lt;HOST_IP&gt;/whoami
Hostname: 618e18ed985a
IP: 127.0.0.1
IP: 172.17.0.3
RemoteAddr: 192.168.1.115:35418
GET /whoami HTTP/1.1
Host: 192.168.1.115

$ curl &lt;HOST_IP&gt;/whoami
Hostname: 0fb436d99af7
IP: 127.0.0.1
IP: 172.17.0.3
RemoteAddr: 192.168.1.115:56090
GET /whoami HTTP/1.1
Host: 192.168.1.115

$ curl &lt;HOST_IP&gt;/whoami
Hostname: ab91f6ec8bcc
IP: 127.0.0.1
IP: 172.17.0.2
RemoteAddr: 192.168.1.115:45552
GET /whoami HTTP/1.1
Host: 192.168.1.115
</code></pre><p>And again in accordance with our solution configuration, the Traefik load balancer has forwarded each of the four path-based HTTP requests across each of the individual whoami web application containers c87e8f82af9f, 618e18ed985a, 0fb436d99af7, and ab91f6ec8bcc by using the round robin algorithm.</p><h2 id="cleaning-up">Cleaning up</h2><p>In order to avoid incurring future charges, follow these procedures to delete the resources provisioned during the work instruction.</p><p>First, let&#x2019;s delete the Amazon ECS services. To do so, run the following AWS CLI commands.</p><blockquote><em>Note: Replace the string <code>&lt;CLUSTER_NAME&gt;</code> with the desired Amazon ECS cluster name. This should be the cluster with which the external instance is registered.</em></blockquote><pre><code class="language-apacheconf">aws ecs delete-service --cluster &lt;CLUSTER_NAME&gt; --service Whoami --force
aws ecs delete-service --cluster &lt;CLUSTER_NAME&gt; --service LoadBalancer --force
</code></pre><p>Next, we deregister the external instances from the Amazon ECS cluster. To do so, run the following AWS CLI command for each external instance.</p><blockquote><em>Note: Replace the string <code>&lt;CLUSTER_NAME&gt;</code> with the desired Amazon ECS cluster name. This should be the cluster with which the external instance is registered. Replace the string <code>&lt;INSTANCE_NAME&gt;</code> with the name of the external instance to be deregistered.</em></blockquote><pre><code class="language-apacheconf">aws ecs deregister-container-instance \
    --cluster &lt;CLUSTER_NAME&gt; \
    --container-instance &lt;INSTANCE_NAME&gt; \
    --force
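
# Optionally, confirm that no container instances remain registered
# (same &lt;CLUSTER_NAME&gt; placeholder as above):
aws ecs list-container-instances --cluster &lt;CLUSTER_NAME&gt;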
</code></pre><p>To delete the Amazon ECS cluster, run the following AWS CLI command.</p><blockquote><em>Note: Replace the string <code>&lt;CLUSTER_NAME&gt;</code> with the desired Amazon ECS cluster name.</em></blockquote><pre><code class="language-apacheconf">aws ecs delete-cluster --cluster &lt;CLUSTER_NAME&gt;</code></pre><p>Finally, follow this <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_manage_delete.html">work instruction</a> to delete the AWS IAM role and associated policy configured to permit the Traefik proxy to read required Amazon ECS attributes.</p><h2 id="conclusion">Conclusion</h2><p>In this post, we showed you how to deploy and load balance a web application across multiple Amazon ECS Anywhere external instances, using the <a href="https://traefik.io/traefik/">Traefik Proxy</a> (Traefik) network load balancer. Implementing network load balancing for Amazon ECS Anywhere workloads such as frontend web applications, microservices, or APIs, is straightforward and effective when using the open source Traefik network load balancer. By incorporating the built-in Traefik ECS provider, you are able to dynamically discover and distribute requests across pools of active application instances as they expand or contract in response to factors such as fluctuating load, application failure, or updates. 
Additional support for encrypted traffic termination, automatic security certificate issuance and renewal, and additional load balancing capabilities such as circuit breakers, and rate limiting offer a comprehensive toolkit for securing and scaling your distributed workloads with Amazon ECS Anywhere.</p><p>To learn more, see Amazon <a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-anywhere.html">ECS Anywhere</a> in the Amazon ECS Developer Guide, and we encourage you to give it a try with the Amazon <a href="https://ecsworkshop.com/ecsanywhere/">ECS Anywhere workshop</a> as a next step.</p><p>Photo by <a href="https://unsplash.com/@omilaev?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash">Igor Omilaev</a> on <a href="https://unsplash.com/photos/a-pile-of-colorful-cassette-tapes-sitting-on-top-of-each-other-6-Y_Hxoh7VU?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash">Unsplash</a>.</p>]]></content:encoded></item><item><title><![CDATA[Kubernetes Multi-cluster Service Discovery using the AWS Cloud Map MCS Controller]]></title><description><![CDATA[The AWS Cloud Map MCS Controller for Kubernetes is an open source project that implements the upstream multi-cluster services API (mcs-api) specification. 
Learn all about the mcs-api, and how to deploy the AWS MCS Controller in support of seamless, multi-cluster workload deployments on Amazon EKS.]]></description><link>https://blog.bytequalia.com/kubernetes-multi-cluster-service-discovery-using-the-aws-cloud-map-mcs-controller/</link><guid isPermaLink="false">61d39ba69e996e0001a3e6f5</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[Amazon EKS]]></category><category><![CDATA[Cloud Native]]></category><dc:creator><![CDATA[Cameron Senese]]></dc:creator><pubDate>Thu, 13 Jan 2022 05:50:57 GMT</pubDate><media:content url="https://blog.bytequalia.com/content/images/2022/01/susan-wilkinson-ks7q3diIJUw-unsplash-1.jpg" medium="image"/><content:encoded><![CDATA[<blockquote><em>The <a href="https://gitlab.com/byteQualia/cloud-map-mcs-controller">cloud-map-mcs-controller</a> git repository provides a detailed work instruction and associated artefacts required for the end-end implementation of the AWS Cloud Map MCS Controller.</em></blockquote><h2 id="introduction">Introduction</h2><img src="https://blog.bytequalia.com/content/images/2022/01/susan-wilkinson-ks7q3diIJUw-unsplash-1.jpg" alt="Kubernetes Multi-cluster Service Discovery using the AWS Cloud Map MCS Controller"><p>Kubernetes, with its implementation of the cluster construct, has simplified the ability to schedule workloads across a collection of VMs or nodes. 
Declarative configuration, immutability, auto-scaling, and self-healing have vastly streamlined workload management within the cluster - enabling teams to move at ever-increasing velocity.</p><p>As the rate of Kubernetes adoption continues to increase, there has been a corresponding increase in the number of use cases that require workloads to break through the perimeter of the single cluster construct. Requirements concerning workload location/proximity, isolation, and reliability have been the primary catalyst for the emergence of deployment scenarios where a single logical workload will span multiple Kubernetes clusters:</p><ul><li><strong>Location</strong> based concerns include network latency requirements (e.g. bringing the application as close to users as possible), data gravity requirements (e.g. bringing elements of the application as close to fixed data sources as possible), and jurisdiction based requirements (e.g. data residency limitations imposed via governing bodies);</li><li><strong>Isolation</strong> based concerns include performance (e.g. reduction in &quot;noisy-neighbor&quot; influence in mixed workload clusters), environmental (e.g. by staged or sandboxed workload constructs such as &quot;dev&quot;, &quot;test&quot;, and &quot;prod&quot; environments), security (e.g. separating untrusted code or sensitive data), organisational (e.g. teams fall under different business units or management domains), and cost based (e.g. teams are subject to separate budgetary constraints);</li><li><strong>Reliability</strong> based concerns include blast radius and infrastructure diversity (e.g. 
preventing an application based or underlying infrastructure issue in one cluster or provider zone from impacting the entire solution), and scale based (e.g. the workload may outgrow a single cluster)</li></ul><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.bytequalia.com/content/images/2022/10/solution-overview-v0.02.png" class="kg-image" alt="Kubernetes Multi-cluster Service Discovery using the AWS Cloud Map MCS Controller" loading="lazy" width="1280" height="720" srcset="https://blog.bytequalia.com/content/images/size/w600/2022/10/solution-overview-v0.02.png 600w, https://blog.bytequalia.com/content/images/size/w1000/2022/10/solution-overview-v0.02.png 1000w, https://blog.bytequalia.com/content/images/2022/10/solution-overview-v0.02.png 1280w" sizes="(min-width: 720px) 720px"><figcaption>Solution Overview: Example Deployment Scenario</figcaption></figure><p>Multi-cluster application architectures tend to be designed to either be <strong>replicated</strong> in nature - with this pattern, each participating cluster runs a full copy of each given application; or alternatively they implement more of a <strong>group-by-service</strong> pattern where the services of a single application or system are split or divided amongst multiple clusters.</p><p>When it comes to the configuration of Kubernetes (and the surrounding infrastructure) to support a given multi-cluster application architecture - the space has evolved over time to include a number of approaches. Implementations tend to draw upon a combination of components at various levels of the stack, and generally speaking they also vary in terms of the &quot;weight&quot; or complexity of the implementation, number and scope of features offered, as well as the associated management overhead. In simple terms, these approaches can be loosely grouped into two main categories:</p><!--kg-card-begin: markdown--><ul>
<li><strong>Network-centric</strong> approaches focus on network interconnection tooling to implement connectivity between clusters in order to facilitate cross-cluster application communication. The various network-centric approaches include those that are tightly coupled with the CNI (e.g. Cilium Mesh), as well as more CNI-agnostic implementations such as Submariner and Skupper. Service mesh implementations also fall into the network-centric category, and these include Istio&#x2019;s multi-cluster support, Linkerd service mirroring, Kuma from Kong, AWS App Mesh, and Consul&#x2019;s mesh gateway. There are also various multi-cluster ingress approaches, as well as virtual-kubelet based approaches including Admiralty, Tensile-kube, and Liqo.</li>
<li><strong>Kubernetes-centric</strong> approaches focus on supporting and extending the core Kubernetes primitives in order to support multi-cluster use cases. These approaches fall under the stewardship of the Kubernetes <a href="https://github.com/kubernetes/community/tree/master/sig-multicluster">Multicluster Special Interest Group</a> whose charter is focused on designing, implementing, and maintaining API&#x2019;s, tools, and documentation related to multi-cluster administration and application management. Subprojects include:
<ul>
<li><strong><a href="https://github.com/kubernetes-sigs/kubefed">kubefed</a></strong> (Kubernetes Cluster Federation) which implements a mechanism to coordinate the configuration of multiple Kubernetes clusters from a single set of APIs in a hosting cluster. kubefed is considered to be foundational for more complex multi-cluster use cases such as deploying multi-geo applications, and disaster recovery.</li>
<li><strong><a href="https://github.com/kubernetes-sigs/work-api">work-api</a></strong> (Multi-Cluster Works API) aims to group a set of Kubernetes API resources to be applied to one or multiple clusters together as a concept of &#x201C;work&#x201D; or &#x201C;workload&#x201D; for the purpose of multi-cluster workload lifecycle management.</li>
<li><strong><a href="https://github.com/kubernetes-sigs/mcs-api">mcs-api</a></strong> (Multi-cluster Services APIs) implements an API specification to extend the single-cluster bounded Kubernetes service concept to function across multiple clusters.</li>
</ul>
</li>
</ul>
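
To make the mcs-api (covered in more detail below) concrete, the following is a minimal sketch of the two objects it introduces. This sketch assumes the upstream <code>multicluster.x-k8s.io/v1alpha1</code> API group; the service name and namespace are illustrative only:

```yaml
# ServiceExport: created by a user in the exporting cluster.
# Its name and namespace must match the Service being exported;
# it carries no spec of its own.
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: nginx-hello
  namespace: demo
---
# ServiceImport: generated by the mcs-api implementation (a controller)
# in each importing cluster - it is not created by hand. Shown here only
# to illustrate the shape of the object a controller produces.
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceImport
metadata:
  name: nginx-hello
  namespace: demo
spec:
  type: ClusterSetIP   # or Headless, mirroring the exported Service
  ports:
  - port: 80
    protocol: TCP
```
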
<!--kg-card-end: markdown--><h3 id="about-the-multi-cluster-services-api">About the Multi-cluster Services API</h3><p>Kubernetes&apos; familiar <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/service">Service</a> object lets you discover and access services within the boundary of a single Kubernetes cluster. The mcs-api implements a Kubernetes-native extension to the Service API, extending the scope of the service resource concept beyond the cluster boundary - providing a mechanism to stitch your multiple clusters together using standard (and familiar) DNS-based service discovery.</p><blockquote><em><a href="https://github.com/kubernetes/enhancements/tree/master/keps/sig-multicluster/1645-multi-cluster-services-api#kep-1645-multi-cluster-services-api">KEP-1645: Multi-Cluster Services API</a> provides the formal description of the Multi Cluster Service API. KEP-1645 doesn&apos;t define a complete implementation - it serves to define how an implementation should behave.<br>At the time of writing, the mcs-api version is: <code>multicluster.x-k8s.io/v1alpha1</code></em></blockquote><p>The primary deployment scenarios covered by the mcs-api include:</p><ul><li><strong>Different services each deployed to separate clusters:</strong> I have 2 clusters, each running different services managed by different teams, where services from one team depend on services from the other team. I want to ensure that a service from one team can discover a service from the other team (via DNS resolving to VIP), regardless of the cluster that they reside in. In addition, I want to make sure that if the service being depended upon is migrated to another cluster, the dependent service is not impacted.</li><li><strong>Single service deployed to multiple clusters:</strong> I have deployed my stateless service to multiple clusters for redundancy or scale. 
Now I want to propagate topologically-aware service endpoints (local, regional, global) to all clusters, so that other services in my clusters can access instances of this service in priority order based on availability and locality.</li></ul><p>The mcs-api is able to support these use cases through the described properties of a <code>ClusterSet</code>, which is a group of clusters with a high degree of mutual trust and shared ownership that share services amongst themselves - along with two additional API objects: the <code>ServiceExport</code> and the <code>ServiceImport</code>.</p><p>Services are not visible to other clusters in the <code>ClusterSet</code> by default; they must be explicitly marked for export by the user. Creating a <code>ServiceExport</code> object for a given service specifies that the service should be exposed across all clusters in the <code>ClusterSet</code>. The mcs-api implementation (typically a controller) will automatically generate a corresponding <code>ServiceImport</code> object (which serves as the in-cluster representation of a multi-cluster service) in each importing cluster - for consumer workloads to be able to locate and consume the exported service.</p><p>DNS-based service discovery for <code>ServiceImport</code> objects is facilitated by the <a href="https://github.com/kubernetes/enhancements/pull/2577">Kubernetes DNS-Based Multicluster Service Discovery Specification</a> which extends the standard Kubernetes DNS paradigms by implementing records named by service and namespace for <code>ServiceImport</code> objects, but as differentiated from regular in-cluster DNS service names by using the special zone <code>.clusterset.local</code>. That is, when a <code>ServiceExport</code> is created, an FQDN for the multi-cluster service becomes available from within the <code>ClusterSet</code>. 
The domain name will be of the format <code>&lt;service&gt;.&lt;ns&gt;.svc.clusterset.local</code>.</p><h4 id="aws-cloud-map-mcs-controller-for-kubernetes">AWS Cloud Map MCS Controller for Kubernetes</h4><p>The <a href="https://github.com/aws/aws-cloud-map-mcs-controller-for-k8s">AWS Cloud Map MCS Controller for Kubernetes</a> (MCS-Controller) is an open source project that implements the multi-cluster services API specification. </p><p>The MCS-Controller syncs services across clusters and makes them available for multi-cluster service discovery and connectivity. The implementation model is decentralised, and utilises AWS Cloud Map as a registry for management and distribution of multi-cluster service data across clusters.</p><p>At the time of writing, the MCS-Controller release version is <a href="https://github.com/aws/aws-cloud-map-mcs-controller-for-k8s/releases/tag/v0.3.0">v0.3.0</a>, which introduces new features including the <a href="https://github.com/kubernetes/enhancements/tree/master/keps/sig-multicluster/2149-clusterid#-crd">ClusterProperty CRD</a> and support for headless services. Milestones are currently in place to bring the project up to <a href="https://github.com/aws/aws-cloud-map-mcs-controller-for-k8s/milestones">v1.0 (GA)</a>, which will include full compliance with the mcs-api specification, support for multiple AWS accounts, and Cloud Map client-side traffic shaping.</p><h4 id="aws-cloud-map">AWS Cloud Map</h4><p><a href="https://aws.amazon.com/cloud-map">AWS Cloud Map</a> is a cloud resource discovery service that allows applications to discover web-based services via the AWS SDK, API calls, or DNS queries. 
Cloud Map is a fully managed service which eliminates the need to set up, update, and manage your own service discovery tools and software.</p><h2 id="tutorial">Tutorial</h2><h3 id="overview">Overview</h3><p>Let&apos;s consider a deployment scenario where we provision a Service into a single EKS cluster, then make the service available from within a second EKS cluster using the AWS Cloud Map MCS Controller.</p><blockquote><em>This tutorial will take you through the end-end implementation of the solution as outlined herein, including a functional implementation of the AWS Cloud Map MCS Controller across x2 EKS clusters situated in separate VPCs.</em></blockquote><blockquote><em>The <a href="https://gitlab.com/byteQualia/cloud-map-mcs-controller">cloud-map-mcs-controller</a> git repository provides a detailed work instruction and associated artefacts required for the end-end implementation of the AWS Cloud Map MCS Controller.</em></blockquote><h3 id="solution-baseline">Solution Baseline</h3><p>The Solution Baseline environment implements each of the prerequisites required in order to provision multi-cluster services.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.bytequalia.com/content/images/2022/10/solution-baseline-v0.02.png" class="kg-image" alt="Kubernetes Multi-cluster Service Discovery using the AWS Cloud Map MCS Controller" loading="lazy" width="1280" height="720" srcset="https://blog.bytequalia.com/content/images/size/w600/2022/10/solution-baseline-v0.02.png 600w, https://blog.bytequalia.com/content/images/size/w1000/2022/10/solution-baseline-v0.02.png 1000w, https://blog.bytequalia.com/content/images/2022/10/solution-baseline-v0.02.png 1280w" sizes="(min-width: 720px) 720px"><figcaption>Example Deployment Scenario: Solution Baseline</figcaption></figure><p>In reference to the <strong>Solution Baseline</strong> diagram:</p><!--kg-card-begin: markdown--><ul>
<li>We have x2 EKS clusters (Cluster 1 &amp; Cluster 2), each deployed into separate VPCs within a single AWS region.
<ul>
<li>Cluster 1 VPC CIDR: 10.10.0.0/16, Kubernetes service IPv4 CIDR: 172.20.0.0/16</li>
<li>Cluster 2 VPC CIDR: 10.12.0.0/16, Kubernetes service IPv4 CIDR: 172.20.0.0/16</li>
</ul>
</li>
<li>VPC peering is configured to permit network connectivity between workloads within each cluster.</li>
<li>The CoreDNS multicluster plugin is deployed to each cluster.</li>
<li>The AWS Cloud Map MCS Controller for Kubernetes is deployed to each cluster.</li>
<li>Clusters 1 &amp; 2 are each configured as members of the same mcs-api <code>ClusterSet</code>.
<ul>
<li>Cluster 1 mcs-api <code>ClusterSet</code>: clusterset1, <code>Cluster</code> Id: cls1.</li>
<li>Cluster 2 mcs-api <code>ClusterSet</code>: clusterset1, <code>Cluster</code> Id: cls2.</li>
</ul>
</li>
<li>Clusters 1 &amp; 2 are both provisioned with the namespace <code>demo</code>.</li>
<li>Cluster 1 has a <code>ClusterIP</code> Service <code>nginx-hello</code> deployed to the <code>demo</code> namespace which frontends a x3 replica Nginx deployment <code>nginx-demo</code>.
<ul>
<li>Service | nginx-hello: 172.20.150.33:80</li>
<li>Endpoints | nginx-hello: 10.10.66.181:80,10.10.78.125:80,10.10.86.76:80</li>
</ul>
</li>
</ul>
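
The <code>nginx-hello</code> Service and <code>nginx-demo</code> Deployment described above can be sketched as follows. This is an illustrative reconstruction only - the selector label and container image are assumptions; the repository&apos;s <code>config/nginx-service.yaml</code> and <code>config/nginx-deployment.yaml</code> are the authoritative versions:

```yaml
# ClusterIP Service fronting the Nginx deployment in the demo namespace.
apiVersion: v1
kind: Service
metadata:
  name: nginx-hello
  namespace: demo
spec:
  type: ClusterIP
  selector:
    app: nginx-demo   # assumed pod label
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
---
# x3 replica Nginx deployment backing the Service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-demo
  namespace: demo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx:stable   # assumed image
        ports:
        - containerPort: 80
```
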
<!--kg-card-end: markdown--><h3 id="service-provisioning">Service Provisioning</h3><p>With the required dependencies in place, the admin user is able to create a <code>ServiceExport</code> object in Cluster 1 for the <code>nginx-hello</code> Service, such that the MCS-Controller implementation will automatically provision a corresponding <code>ServiceImport</code> in Cluster 2 for consumer workloads to be able to locate and consume the exported service.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.bytequalia.com/content/images/2022/10/service-provisioning-v0.02.png" class="kg-image" alt="Kubernetes Multi-cluster Service Discovery using the AWS Cloud Map MCS Controller" loading="lazy" width="1280" height="720" srcset="https://blog.bytequalia.com/content/images/size/w600/2022/10/service-provisioning-v0.02.png 600w, https://blog.bytequalia.com/content/images/size/w1000/2022/10/service-provisioning-v0.02.png 1000w, https://blog.bytequalia.com/content/images/2022/10/service-provisioning-v0.02.png 1280w" sizes="(min-width: 720px) 720px"><figcaption>Example Deployment Scenario: Service Provisioning</figcaption></figure><p>In reference to the <strong>Service Provisioning</strong> diagram:</p><ol><li>The administrator submits the request to the Cluster 1 Kube API server for a <code>ServiceExport</code> object to be created for <code>ClusterIP</code> Service <code>nginx-hello</code> in the <code>demo</code> Namespace.</li><li>The MCS-Controller in Cluster 1, watching for <code>ServiceExport</code> object creation, provisions a corresponding <code>nginx-hello</code> service in the Cloud Map <code>demo</code> namespace. 
The Cloud Map service is provisioned with sufficient detail for the Service object and corresponding <code>EndpointSlice</code> to be provisioned within additional clusters in the <code>ClusterSet</code>.</li><li>The MCS-Controller in Cluster 2 responds to the creation of the <code>nginx-hello</code> Cloud Map Service by provisioning the <code>ServiceImport</code> object and corresponding <code>EndpointSlice</code> objects via the Kube API Server.</li><li>The CoreDNS multicluster plugin, watching for <code>ServiceImport</code> and <code>EndpointSlice</code> creation, provisions corresponding DNS records within the <code>.clusterset.local</code> zone.</li></ol><h3 id="service-consumption">Service Consumption</h3><p>Here we deploy a client application and consume the example multi-cluster service using native Kubernetes Service Discovery.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.bytequalia.com/content/images/2022/10/service-consumption-v0.02.png" class="kg-image" alt="Kubernetes Multi-cluster Service Discovery using the AWS Cloud Map MCS Controller" loading="lazy" width="1280" height="720" srcset="https://blog.bytequalia.com/content/images/size/w600/2022/10/service-consumption-v0.02.png 600w, https://blog.bytequalia.com/content/images/size/w1000/2022/10/service-consumption-v0.02.png 1000w, https://blog.bytequalia.com/content/images/2022/10/service-consumption-v0.02.png 1280w" sizes="(min-width: 720px) 720px"><figcaption>Example Deployment Scenario: Multi-Cluster Service Consumption</figcaption></figure><p>In reference to the <strong>Service Consumption</strong> diagram:</p><ol><li>The <code>client-hello</code> pod in Cluster 2 needs to consume the <code>nginx-hello</code> service, for which all Endpoints are deployed in Cluster 1. The <code>client-hello</code> pod requests the resource <a href="http://nginx-hello.demo.svc.clusterset.local" rel="nofollow noreferrer noopener">http://nginx-hello.demo.svc.clusterset.local:80</a>. 
DNS based service discovery [1b] responds with the IP address of the local <code>nginx-hello</code> <code>ServiceExport</code> Service <code>ClusterSetIP</code>.</li><li>Requests to the local <code>ClusterSetIP</code> at <code>nginx-hello.demo.svc.clusterset.local</code> are proxied to the Endpoints located on Cluster 1.</li></ol><blockquote><em>Note: In accordance with the mcs-api specification, a multi-cluster service will be imported by all clusters in which the service&apos;s namespace exists, meaning that each exporting cluster will also import the corresponding multi-cluster service. As such, the <code>nginx-hello</code> service will also be accessible via <code>ServiceExport</code> Service <code>ClusterSetIP</code> on Cluster 1. Identical to Cluster 2, the <code>ServiceExport</code> Service is resolvable by name at <code>nginx-hello.demo.svc.clusterset.local</code>.</em></blockquote><h3 id="implementation">Implementation</h3><h3 id="solution-baseline-1">Solution Baseline</h3><p>To prepare your environment to match the Solution Baseline deployment scenario, the following prerequisites should be addressed.</p><h4 id="clone-the-cloud-map-mcs-controller-git-repository">Clone the <code>cloud-map-mcs-controller</code> git repository</h4><p>Sample configuration files will be used through the course of the tutorial, which have been made available in the <code>cloud-map-mcs-controller</code> repository.</p><p>Clone the repository to the host from which you will be bootstrapping the clusters:</p><pre><code class="language-bash">git clone https://gitlab.com/byteQualia/cloud-map-mcs-controller.git</code></pre><p></p><blockquote><em>Note: All commands as provided should be run from the root directory of the cloned git repository.</em></blockquote><blockquote><em>Note: Certain values located within the provided configuration files have been configured for substitution with OS environment variables. 
Work instructions below will identify which environment variables should be set before issuing any commands which will depend on variable substitution.</em></blockquote><h4 id="create-eks-clusters">Create EKS Clusters</h4><p>x2 EKS clusters should be provisioned, each deployed into separate VPCs within a single AWS region.</p><ul><li>VPCs and clusters should be provisioned with non-overlapping CIDRs.</li><li>For compatibility with the remainder of the tutorial, it is recommended that <code>eksctl</code> be used to provision the clusters and associated security configuration. <em>By default, the <code>eksctl create cluster</code> command will create a dedicated VPC.</em></li></ul><p>Sample <code>eksctl</code> config file <code>/config/eksctl-cluster.yaml</code> has been provided:</p><ul><li>Environment variables AWS_REGION, CLUSTER_NAME, NODEGROUP_NAME, and VPC_CIDR should be configured. Example values have been provided in the below command reference - substitute values to suit your preference.</li><li>Example VPC CIDRs match the values provided in the Baseline Configuration description.</li></ul><p>Run the following commands to create clusters using <code>eksctl</code>.</p><p>Cluster 1:</p><pre><code class="language-bash">export AWS_REGION=ap-southeast-2
export CLUSTER_NAME=cls1
export NODEGROUP_NAME=cls1-nodegroup1
export VPC_CIDR=10.10.0.0/16
envsubst &lt; config/eksctl-cluster.yaml | eksctl create cluster -f -</code></pre><p></p><p>Cluster 2:</p><pre><code class="language-bash">export AWS_REGION=ap-southeast-2
export CLUSTER_NAME=cls2
export NODEGROUP_NAME=cls2-nodegroup1
export VPC_CIDR=10.12.0.0/16
envsubst &lt; config/eksctl-cluster.yaml | eksctl create cluster -f -</code></pre><p></p><h4 id="create-vpc-peering-connection">Create VPC Peering Connection</h4><p>VPC peering is required to permit network connectivity between workloads provisioned within each cluster.</p><p>To create the VPC Peering connection, follow the instruction <a href="https://docs.aws.amazon.com/vpc/latest/peering/create-vpc-peering-connection.html" rel="nofollow noreferrer noopener">Create a VPC peering connection with another VPC in your account</a> for guidance.</p><p>VPC route tables in each VPC require updating, follow the instruction <a href="https://docs.aws.amazon.com/vpc/latest/peering/vpc-peering-routing.html" rel="nofollow noreferrer noopener">Update your route tables for a VPC peering connection</a> for guidance. For simplicity, it&apos;s recommended to configure route destinations as the IPv4 CIDR block of the peer VPC.</p><p>Security Groups require updating to permit cross-cluster network communication. EKS cluster security groups in each cluster should be updated to permit inbound traffic originating from external clusters. 
For simplicity, it&apos;s recommended the Cluster 1 &amp; Cluster 2 <a href="https://docs.aws.amazon.com/eks/latest/userguide/sec-group-reqs.html" rel="nofollow noreferrer noopener">EKS Cluster Security groups</a> be updated to allow inbound traffic from the IPv4 CIDR block of the peer VPC.</p><blockquote><em>The <a href="https://docs.aws.amazon.com/vpc/latest/reachability/getting-started.html" rel="nofollow noreferrer noopener">VPC Reachability Analyzer</a> can be used to test and diagnose end-end connectivity between worker nodes within each cluster.</em></blockquote><h4 id="enable-eks-oidc-provider">Enable EKS OIDC Provider</h4><p>In order to map required Cloud Map AWS IAM permissions to the MCS-Controller Kubernetes service account, we need to enable the OpenID Connect (OIDC) identity provider in our EKS clusters using <code>eksctl</code>.</p><ul><li>Environment variables AWS_REGION and CLUSTER_NAME should be configured.</li></ul><p>Run the following commands to enable OIDC providers using <code>eksctl</code>.</p><p>Cluster 1:</p><pre><code class="language-bash">export AWS_REGION=ap-southeast-2
export CLUSTER_NAME=cls1
eksctl utils associate-iam-oidc-provider \
    --region $AWS_REGION \
    --cluster $CLUSTER_NAME \
    --approve</code></pre><p></p><p>Cluster 2:</p><pre><code class="language-bash">export AWS_REGION=ap-southeast-2
export CLUSTER_NAME=cls2
eksctl utils associate-iam-oidc-provider \
    --region $AWS_REGION \
    --cluster $CLUSTER_NAME \
    --approve</code></pre><p></p><h4 id="implement-coredns-multicluster-plugin">Implement CoreDNS multicluster plugin</h4><p>The CoreDNS multicluster plugin implements the <a href="https://github.com/kubernetes/enhancements/pull/2577" rel="nofollow noreferrer noopener">Kubernetes DNS-Based Multicluster Service Discovery Specification</a> which enables CoreDNS to lifecycle manage DNS records for <code>ServiceImport</code> objects. To enable the CoreDNS multicluster plugin within both EKS clusters, perform the following procedure.</p><h4 id="update-coredns-rbac">Update CoreDNS RBAC</h4><p>Run the following command against both clusters to update the <code>system:coredns</code> clusterrole to include access to additional multi-cluster API resources:</p><pre><code class="language-bash">kubectl apply -f config/coredns-clusterrole.yaml</code></pre><p></p><h4 id="update-the-coredns-configmap">Update the CoreDNS configmap</h4><p>Run the following command against both clusters to update the default CoreDNS configmap to include the multicluster plugin directive, and <code>clusterset.local</code> zone:</p><pre><code class="language-bash">kubectl apply -f config/coredns-configmap.yaml</code></pre><p></p><h4 id="update-the-coredns-deployment">Update the CoreDNS deployment</h4><p>Run the following command against both clusters to update the default CoreDNS deployment to use the container image <code>ghcr.io/aws/aws-cloud-map-mcs-controller-for-k8s/coredns-multicluster/coredns:v1.8.4</code> - which includes the multicluster plugin:</p><pre><code class="language-bash">kubectl apply -f config/coredns-deployment.yaml</code></pre><p></p><h4 id="install-the-aws-cloud-map-mcs-controller-for-k8s">Install the aws-cloud-map-mcs-controller-for-k8s</h4><h6 id="configure-mcs-controller-rbac">Configure MCS-Controller RBAC</h6><p>Before the Cloud Map MCS-Controller is installed, we will first pre-provision the controller Service Account, granting IAM access rights 
<code>AWSCloudMapFullAccess</code> to ensure that the MCS Controller can lifecycle manage Cloud Map resources.</p><ul><li>Environment variable CLUSTER_NAME should be configured.</li></ul><p>Run the following commands to create the MCS-Controller namespace and service accounts in each cluster.</p><p><em>Note: Be sure to change the <code>kubectl</code> context to the correct cluster before issuing commands.</em></p><p>Cluster 1:</p><pre><code class="language-bash">export CLUSTER_NAME=cls1
kubectl create namespace cloud-map-mcs-system
eksctl create iamserviceaccount \
--cluster $CLUSTER_NAME \
--namespace cloud-map-mcs-system \
--name cloud-map-mcs-controller-manager \
--attach-policy-arn arn:aws:iam::aws:policy/AWSCloudMapFullAccess \
--override-existing-serviceaccounts \
--approve</code></pre><p></p><p>Cluster 2:</p><pre><code class="language-bash">export CLUSTER_NAME=cls2
kubectl create namespace cloud-map-mcs-system
eksctl create iamserviceaccount \
--cluster $CLUSTER_NAME \
--namespace cloud-map-mcs-system \
--name cloud-map-mcs-controller-manager \
--attach-policy-arn arn:aws:iam::aws:policy/AWSCloudMapFullAccess \
--override-existing-serviceaccounts \
--approve</code></pre><p></p><h6 id="install-the-mcs-controller">Install the MCS-Controller</h6><p>Now to install the MCS-Controller.</p><ul><li>Environment variable AWS_REGION should be configured.</li></ul><p>Run the following command against both clusters to install the MCS-Controller latest release:</p><pre><code class="language-bash">export AWS_REGION=ap-southeast-2
kubectl apply -k &quot;github.com/aws/aws-cloud-map-mcs-controller-for-k8s/config/controller_install_release&quot;</code></pre><p></p><h6 id="assign-mcs-api-clusterset-membership-and-cluster-identifier">Assign mcs-api <code>ClusterSet</code> membership and <code>Cluster</code> identifier</h6><p>To ensure that <code>ServiceExport</code> and <code>ServiceImport</code> objects propagate correctly between clusters, each cluster should be configured as a member of a single mcs-api <code>ClusterSet</code> (clusterset1 in our example deployment scenario), and should be assigned a unique mcs-api <code>Cluster</code> Id within the <code>ClusterSet</code> (cls1 &amp; cls2 in our example deployment scenario).</p><ul><li>Environment variable CLUSTER_ID should be configured.</li><li>Environment variable CLUSTERSET_ID should be configured.</li></ul><p>Run the following commands to configure <code>Cluster</code> Id and <code>ClusterSet</code> membership.</p><p>Cluster 1:</p><pre><code class="language-bash">export CLUSTER_ID=cls1
export CLUSTERSET_ID=clusterset1
envsubst &lt; config/mcsapi-clusterproperty.yaml | kubectl apply -f -</code></pre><p></p><p>Cluster 2:</p><pre><code class="language-bash">export CLUSTER_ID=cls2
export CLUSTERSET_ID=clusterset1
envsubst &lt; config/mcsapi-clusterproperty.yaml | kubectl apply -f -</code></pre><p></p><h6 id="create-nginx-hello-service">Create <code>nginx-hello</code> Service</h6><p>Now that the clusters, CoreDNS and the MCS-Controller have been configured, we can create the <code>demo</code> namespace in both clusters and implement the <code>nginx-hello</code> Service and associated Deployment into Cluster 1.</p><p>Run the following commands to prepare the demo environment on both clusters.</p><p><em>Note: be sure to change the <code>kubectl</code> context to the correct cluster before issuing commands.</em></p><p>Cluster 1:</p><pre><code class="language-bash">kubectl create namespace demo
kubectl apply -f config/nginx-deployment.yaml
kubectl apply -f config/nginx-service.yaml</code></pre><p></p><p>Cluster 2:</p><pre><code class="language-bash">kubectl create namespace demo</code></pre><p></p><h3 id="service-provisioning-1">Service Provisioning</h3><p>With the Solution Baseline in place, let&apos;s continue by implementing the Service Provisioning scenario. We&apos;ll create a <code>ServiceExport</code> object in Cluster 1 for the <code>nginx-hello</code> Service. This will trigger the Cluster 1 MCS-Controller to complete service provisioning and propagation into Cloud Map, and subsequent import and provisioning by the MCS-Controller in Cluster 2.</p><h4 id="create-nginx-hello-serviceexport">Create <code>nginx-hello</code> ServiceExport</h4><p>Run the following command against Cluster 1 to create the <code>ServiceExport</code> object for the <code>nginx-hello</code> Service:</p><pre><code class="language-bash">kubectl apply -f config/nginx-serviceexport.yaml</code></pre><p></p><h4 id="verify-nginx-hello-serviceexport">Verify <code>nginx-hello</code> ServiceExport</h4><p>Let&apos;s verify the <code>ServiceExport</code> creation has succeeded, and that corresponding objects have been created in Cluster 1, Cloud Map, and Cluster 2.</p><h6 id="cluster-1">Cluster 1</h6><p>Inspecting the MCS-Controller logs in Cluster 1, we see that the controller has detected the <code>ServiceExport</code> object, and created the corresponding <code>demo</code> Namespace and <code>nginx-hello</code> Service in Cloud Map:</p><pre><code class="language-bash">$ kubectl logs cloud-map-mcs-controller-manager-5b9f959fc9-hmz88 -c manager --namespace cloud-map-mcs-system
{&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1665108812.7046816,&quot;logger&quot;:&quot;cloudmap&quot;,&quot;msg&quot;:&quot;namespace created&quot;,&quot;nsId&quot;:&quot;ns-nlnawwa2wa3ajoh3&quot;}
{&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1665108812.7626762,&quot;logger&quot;:&quot;cloudmap&quot;,&quot;msg&quot;:&quot;service created&quot;,&quot;namespace&quot;:&quot;demo&quot;,&quot;name&quot;:&quot;nginx-hello&quot;,&quot;id&quot;:&quot;srv-xqirlhajwua5vkvo&quot;}
{&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1665108812.7627065,&quot;logger&quot;:&quot;cloudmap&quot;,&quot;msg&quot;:&quot;fetching a service&quot;,&quot;namespace&quot;:&quot;demo&quot;,&quot;name&quot;:&quot;nginx-hello&quot;}
{&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1665108812.8299918,&quot;logger&quot;:&quot;cloudmap&quot;,&quot;msg&quot;:&quot;registering endpoints&quot;,&quot;namespaceName&quot;:&quot;demo&quot;,&quot;serviceName&quot;:&quot;nginx-hello&quot;,&quot;endpoints&quot;:[{&quot;Id&quot;:&quot;tcp-10_10_86_76-80&quot;,&quot;IP&quot;:&quot;10.10.86.76&quot;,&quot;EndpointPort&quot;:{&quot;Name&quot;:&quot;&quot;,&quot;Port&quot;:80,&quot;TargetPort&quot;:&quot;&quot;,&quot;Protocol&quot;:&quot;TCP&quot;},&quot;ServicePort&quot;:{&quot;Name&quot;:&quot;&quot;,&quot;Port&quot;:80,&quot;TargetPort&quot;:&quot;80&quot;,&quot;Protocol&quot;:&quot;TCP&quot;},&quot;ClusterId&quot;:&quot;cls1&quot;,&quot;ClusterSetId&quot;:&quot;clusterset1&quot;,&quot;ServiceType&quot;:&quot;ClusterSetIP&quot;,&quot;ServiceExportCreationTimestamp&quot;:1665108776000,&quot;Ready&quot;:true,&quot;Hostname&quot;:&quot;&quot;,&quot;Nodename&quot;:&quot;ip-10-10-77-143.ap-southeast-2.compute.internal&quot;,&quot;Attributes&quot;:{&quot;K8S_CONTROLLER&quot;:&quot;aws-cloud-map-mcs-controller-for-k8s d07e680 (d07e680)&quot;}},{&quot;Id&quot;:&quot;tcp-10_10_66_181-80&quot;,&quot;IP&quot;:&quot;10.10.66.181&quot;,&quot;EndpointPort&quot;:{&quot;Name&quot;:&quot;&quot;,&quot;Port&quot;:80,&quot;TargetPort&quot;:&quot;&quot;,&quot;Protocol&quot;:&quot;TCP&quot;},&quot;ServicePort&quot;:{&quot;Name&quot;:&quot;&quot;,&quot;Port&quot;:80,&quot;TargetPort&quot;:&quot;80&quot;,&quot;Protocol&quot;:&quot;TCP&quot;},&quot;ClusterId&quot;:&quot;cls1&quot;,&quot;ClusterSetId&quot;:&quot;clusterset1&quot;,&quot;ServiceType&quot;:&quot;ClusterSetIP&quot;,&quot;ServiceExportCreationTimestamp&quot;:1665108776000,&quot;Ready&quot;:true,&quot;Hostname&quot;:&quot;&quot;,&quot;Nodename&quot;:&quot;ip-10-10-77-143.ap-southeast-2.compute.internal&quot;,&quot;Attributes&quot;:{&quot;K8S_CONTROLLER&quot;:&quot;aws-cloud-map-mcs-controller-for-k8s d07e680 
(d07e680)&quot;}},{&quot;Id&quot;:&quot;tcp-10_10_78_125-80&quot;,&quot;IP&quot;:&quot;10.10.78.125&quot;,&quot;EndpointPort&quot;:{&quot;Name&quot;:&quot;&quot;,&quot;Port&quot;:80,&quot;TargetPort&quot;:&quot;&quot;,&quot;Protocol&quot;:&quot;TCP&quot;},&quot;ServicePort&quot;:{&quot;Name&quot;:&quot;&quot;,&quot;Port&quot;:80,&quot;TargetPort&quot;:&quot;80&quot;,&quot;Protocol&quot;:&quot;TCP&quot;},&quot;ClusterId&quot;:&quot;cls1&quot;,&quot;ClusterSetId&quot;:&quot;clusterset1&quot;,&quot;ServiceType&quot;:&quot;ClusterSetIP&quot;,&quot;ServiceExportCreationTimestamp&quot;:1665108776000,&quot;Ready&quot;:true,&quot;Hostname&quot;:&quot;&quot;,&quot;Nodename&quot;:&quot;ip-10-10-77-143.ap-southeast-2.compute.internal&quot;,&quot;Attributes&quot;:{&quot;K8S_CONTROLLER&quot;:&quot;aws-cloud-map-mcs-controller-for-k8s d07e680 (d07e680)&quot;}}]}</code></pre><p></p><p>Using the AWS CLI we can verify Namespace and Service resources provisioned to Cloud Map by the Cluster 1 MCS-Controller:</p><pre><code class="language-bash">$ aws servicediscovery list-namespaces
{
    &quot;Namespaces&quot;: [
        {
            &quot;Id&quot;: &quot;ns-nlnawwa2wa3ajoh3&quot;,
            &quot;Arn&quot;: &quot;arn:aws:servicediscovery:ap-southeast-2:911483634971:namespace/ns-nlnawwa2wa3ajoh3&quot;,
            &quot;Name&quot;: &quot;demo&quot;,
            &quot;Type&quot;: &quot;HTTP&quot;,
            &quot;Properties&quot;: {
                &quot;DnsProperties&quot;: {
                    &quot;SOA&quot;: {}
                },
                &quot;HttpProperties&quot;: {
                    &quot;HttpName&quot;: &quot;demo&quot;
                }
            },
            &quot;CreateDate&quot;: &quot;2022-10-07T02:13:32.310000+00:00&quot;
        }
    ]
}
$ aws servicediscovery list-services
{
    &quot;Services&quot;: [
        {
            &quot;Id&quot;: &quot;srv-xqirlhajwua5vkvo&quot;,
            &quot;Arn&quot;: &quot;arn:aws:servicediscovery:ap-southeast-2:911483634971:service/srv-xqirlhajwua5vkvo&quot;,
            &quot;Name&quot;: &quot;nginx-hello&quot;,
            &quot;Type&quot;: &quot;HTTP&quot;,
            &quot;DnsConfig&quot;: {},
            &quot;CreateDate&quot;: &quot;2022-10-07T02:13:32.744000+00:00&quot;
        }
    ]
}
$ aws servicediscovery discover-instances --namespace-name demo --service-name nginx-hello
{
    &quot;Instances&quot;: [
        {
            &quot;InstanceId&quot;: &quot;tcp-10_10_78_125-80&quot;,
            &quot;NamespaceName&quot;: &quot;demo&quot;,
            &quot;ServiceName&quot;: &quot;nginx-hello&quot;,
            &quot;HealthStatus&quot;: &quot;UNKNOWN&quot;,
            &quot;Attributes&quot;: {
                &quot;AWS_INSTANCE_IPV4&quot;: &quot;10.10.78.125&quot;,
                &quot;AWS_INSTANCE_PORT&quot;: &quot;80&quot;,
                &quot;CLUSTERSET_ID&quot;: &quot;clusterset1&quot;,
                &quot;CLUSTER_ID&quot;: &quot;cls1&quot;,
                &quot;ENDPOINT_PORT_NAME&quot;: &quot;&quot;,
                &quot;ENDPOINT_PROTOCOL&quot;: &quot;TCP&quot;,
                &quot;HOSTNAME&quot;: &quot;&quot;,
                &quot;K8S_CONTROLLER&quot;: &quot;aws-cloud-map-mcs-controller-for-k8s d07e680 (d07e680)&quot;,
                &quot;NODENAME&quot;: &quot;ip-10-10-77-143.ap-southeast-2.compute.internal&quot;,
                &quot;READY&quot;: &quot;true&quot;,
                &quot;SERVICE_EXPORT_CREATION_TIMESTAMP&quot;: &quot;1665108776000&quot;,
                &quot;SERVICE_PORT&quot;: &quot;80&quot;,
                &quot;SERVICE_PORT_NAME&quot;: &quot;&quot;,
                &quot;SERVICE_PROTOCOL&quot;: &quot;TCP&quot;,
                &quot;SERVICE_TARGET_PORT&quot;: &quot;80&quot;,
                &quot;SERVICE_TYPE&quot;: &quot;ClusterSetIP&quot;
            }
        },
        {
            &quot;InstanceId&quot;: &quot;tcp-10_10_66_181-80&quot;,
            &quot;NamespaceName&quot;: &quot;demo&quot;,
            &quot;ServiceName&quot;: &quot;nginx-hello&quot;,
            &quot;HealthStatus&quot;: &quot;UNKNOWN&quot;,
            &quot;Attributes&quot;: {
                &quot;AWS_INSTANCE_IPV4&quot;: &quot;10.10.66.181&quot;,
                &quot;AWS_INSTANCE_PORT&quot;: &quot;80&quot;,
                &quot;CLUSTERSET_ID&quot;: &quot;clusterset1&quot;,
                &quot;CLUSTER_ID&quot;: &quot;cls1&quot;,
                &quot;ENDPOINT_PORT_NAME&quot;: &quot;&quot;,
                &quot;ENDPOINT_PROTOCOL&quot;: &quot;TCP&quot;,
                &quot;HOSTNAME&quot;: &quot;&quot;,
                &quot;K8S_CONTROLLER&quot;: &quot;aws-cloud-map-mcs-controller-for-k8s d07e680 (d07e680)&quot;,
                &quot;NODENAME&quot;: &quot;ip-10-10-77-143.ap-southeast-2.compute.internal&quot;,
                &quot;READY&quot;: &quot;true&quot;,
                &quot;SERVICE_EXPORT_CREATION_TIMESTAMP&quot;: &quot;1665108776000&quot;,
                &quot;SERVICE_PORT&quot;: &quot;80&quot;,
                &quot;SERVICE_PORT_NAME&quot;: &quot;&quot;,
                &quot;SERVICE_PROTOCOL&quot;: &quot;TCP&quot;,
                &quot;SERVICE_TARGET_PORT&quot;: &quot;80&quot;,
                &quot;SERVICE_TYPE&quot;: &quot;ClusterSetIP&quot;
            }
        },
        {
            &quot;InstanceId&quot;: &quot;tcp-10_10_86_76-80&quot;,
            &quot;NamespaceName&quot;: &quot;demo&quot;,
            &quot;ServiceName&quot;: &quot;nginx-hello&quot;,
            &quot;HealthStatus&quot;: &quot;UNKNOWN&quot;,
            &quot;Attributes&quot;: {
                &quot;AWS_INSTANCE_IPV4&quot;: &quot;10.10.86.76&quot;,
                &quot;AWS_INSTANCE_PORT&quot;: &quot;80&quot;,
                &quot;CLUSTERSET_ID&quot;: &quot;clusterset1&quot;,
                &quot;CLUSTER_ID&quot;: &quot;cls1&quot;,
                &quot;ENDPOINT_PORT_NAME&quot;: &quot;&quot;,
                &quot;ENDPOINT_PROTOCOL&quot;: &quot;TCP&quot;,
                &quot;HOSTNAME&quot;: &quot;&quot;,
                &quot;K8S_CONTROLLER&quot;: &quot;aws-cloud-map-mcs-controller-for-k8s d07e680 (d07e680)&quot;,
                &quot;NODENAME&quot;: &quot;ip-10-10-77-143.ap-southeast-2.compute.internal&quot;,
                &quot;READY&quot;: &quot;true&quot;,
                &quot;SERVICE_EXPORT_CREATION_TIMESTAMP&quot;: &quot;1665108776000&quot;,
                &quot;SERVICE_PORT&quot;: &quot;80&quot;,
                &quot;SERVICE_PORT_NAME&quot;: &quot;&quot;,
                &quot;SERVICE_PROTOCOL&quot;: &quot;TCP&quot;,
                &quot;SERVICE_TARGET_PORT&quot;: &quot;80&quot;,
                &quot;SERVICE_TYPE&quot;: &quot;ClusterSetIP&quot;
            }
        }
    ]
}</code></pre><p></p><h6 id="cluster-2">Cluster 2</h6><p>Inspecting the MCS-Controller logs in Cluster 2, we see that the controller has detected the <code>nginx-hello</code> Cloud Map Service, and created the corresponding Kubernetes <code>ServiceImport</code>:</p><pre><code class="language-bash">$ kubectl logs cloud-map-mcs-controller-manager-5b9f959fc9-v72s4 -c manager --namespace cloud-map-mcs-system
{&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1665108822.2781157,&quot;logger&quot;:&quot;controllers.Cloudmap&quot;,&quot;msg&quot;:&quot;created ServiceImport&quot;,&quot;namespace&quot;:&quot;demo&quot;,&quot;name&quot;:&quot;nginx-hello&quot;}
{&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1665108824.2420218,&quot;logger&quot;:&quot;controllers.Cloudmap&quot;,&quot;msg&quot;:&quot;created derived Service&quot;,&quot;namespace&quot;:&quot;demo&quot;,&quot;name&quot;:&quot;imported-9cfu7k5mkr&quot;}
{&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1665108824.2501283,&quot;logger&quot;:&quot;controllers.Cloudmap&quot;,&quot;msg&quot;:&quot;ServiceImport IPs need update&quot;,&quot;ServiceImport IPs&quot;:[],&quot;cluster IPs&quot;:[&quot;172.20.80.119&quot;]}
{&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1665108824.2618752,&quot;logger&quot;:&quot;controllers.Cloudmap&quot;,&quot;msg&quot;:&quot;updated ServiceImport&quot;,&quot;namespace&quot;:&quot;demo&quot;,&quot;name&quot;:&quot;nginx-hello&quot;,&quot;IP&quot;:[&quot;172.20.80.119&quot;],&quot;ports&quot;:[{&quot;protocol&quot;:&quot;TCP&quot;,&quot;port&quot;:80}]}</code></pre><p></p><p>Inspecting the Cluster 2 Kubernetes <code>ServiceImport</code> object:</p><pre><code class="language-bash">$ kubectl get serviceimports.multicluster.x-k8s.io nginx-hello -n demo -o yaml
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceImport
metadata:
  annotations:
    multicluster.k8s.aws/derived-service: &apos;[{&quot;cluster&quot;:&quot;cls1&quot;,&quot;derived-service&quot;:&quot;imported-9cfu7k5mkr&quot;}]&apos;
  creationTimestamp: &quot;2022-10-07T02:13:42Z&quot;
  generation: 2
  name: nginx-hello
  namespace: demo
  resourceVersion: &quot;12787&quot;
  uid: a53901af-57a8-49c7-aeb1-f67c4a44c2d2
spec:
  ips:
  - 172.20.80.119
  ports:
  - port: 80
    protocol: TCP
  type: ClusterSetIP
status:
  clusters:
  - cluster: cls1</code></pre><p></p><p>And the corresponding Cluster 2 Kubernetes Endpoint Slice:</p><pre><code class="language-bash">$ kubectl get endpointslices.discovery.k8s.io -n demo
NAME                        ADDRESSTYPE   PORTS   ENDPOINTS                               AGE
imported-9cfu7k5mkr-dc7q9   IPv4          80      10.10.78.125,10.10.86.76,10.10.66.181   14m</code></pre><p></p><p>Important points to note:</p><ul><li>the <code>ServiceImport</code> Service is assigned an IP address from the local Kubernetes service IPv4 CIDR: 172.20.0.0/16 (172.20.80.119) so as to permit service discovery and access to the remote service endpoints from within the local cluster.</li><li>the endpoint IP addresses match those of the <code>nginx-demo</code> Endpoints in Cluster 1 (i.e. from the Cluster 1 VPC CIDR: 10.10.0.0/16).</li></ul><h3 id="service-consumption-1">Service Consumption</h3><p>With the Solution Baseline and Service Provisioning in place, workloads in Cluster 2 are now able to consume the <code>nginx-hello</code> Service Endpoints located in Cluster 1 via the locally provisioned <code>ServiceImport</code> object. To complete the Service Consumption deployment scenario we&apos;ll deploy the <code>client-hello</code> Pod into Cluster 2, and observe how it&apos;s able to perform cross-cluster service consumption of the <code>nginx-hello</code> Service Endpoints in Cluster 1.</p><h4 id="create-client-hello-pod">Create <code>client-hello</code> Pod</h4><p>Run the following command against Cluster 2 to create the <code>client-hello</code> Pod:</p><pre><code class="language-bash">kubectl apply -f config/client-hello.yaml</code></pre><p></p><h4 id="verify-multi-cluster-service-consumption">Verify <code>multi-cluster</code> service consumption</h4><p>Let&apos;s exec into the <code>client-hello</code> Pod and perform an <code>nslookup</code> to cluster-local CoreDNS for the <code>ServiceImport</code> Service <code>nginx-hello.demo.svc.clusterset.local</code>:</p><pre><code class="language-bash">$ kubectl exec -it client-hello -n demo -- /bin/sh
/ # nslookup nginx-hello.demo.svc.clusterset.local
Server:         172.20.0.10
Address:        172.20.0.10:53

Name:   nginx-hello.demo.svc.clusterset.local
Address: 172.20.80.119</code></pre><p></p><p>Note that the Pod resolves the address of the <code>ServiceImport</code> object on Cluster 2.</p><p>Finally, we generate HTTP requests from the <code>client-hello</code> Pod to the local <code>nginx-hello</code> <code>ServiceImport</code> Service:</p><pre><code class="language-bash">/ # apk --no-cache add curl
/ # curl nginx-hello.demo.svc.clusterset.local
Server address: 10.10.86.76:80
Server name: nginx-demo-59c6cb8d7b-m4ktw
Date: 07/Oct/2022:02:31:45 +0000
URI: /
Request ID: 17d43e6e8801a98d05059dfaf88d0abe
/ # 
/ # curl nginx-hello.demo.svc.clusterset.local
Server address: 10.10.78.125:80
Server name: nginx-demo-59c6cb8d7b-8w6rp
Date: 07/Oct/2022:02:32:26 +0000
URI: /
Request ID: 0ddc09ffe7fd45c52903ce34c955f555
/ # 
/ # curl nginx-hello.demo.svc.clusterset.local
Server address: 10.10.66.181:80
Server name: nginx-demo-59c6cb8d7b-mtm8l
Date: 07/Oct/2022:02:32:53 +0000
URI: /
Request ID: 2fde1c34008a5ec18b8ae23797489c3a</code></pre><p></p><p>Note that the responding Server Names and Server addresses are those of the <code>nginx-demo</code> Pods on Cluster 1 - confirming that the requests to the local <code>ClusterSetIP</code> at <code>nginx-hello.demo.svc.clusterset.local</code> originating on Cluster 2 are proxied cross-cluster to the Endpoints located on Cluster 1!</p><h2 id="conclusion">Conclusion</h2><p>The proliferation of container adoption is presenting new challenges in supporting workloads that have broken through the perimeter of the single cluster construct.</p><p>For teams that are looking to implement a Kubernetes-centric approach to managing multi-cluster workloads, the mcs-api describes an effective approach to extending the scope of the service resource concept beyond the cluster boundary - providing a mechanism to weave multiple clusters together using standard (and familiar) DNS based service discovery.</p><p>The <a href="https://github.com/aws/aws-cloud-map-mcs-controller-for-k8s" rel="nofollow noreferrer noopener">AWS Cloud Map MCS Controller for Kubernetes</a> is an open source project that integrates with AWS Cloud Map to offer a decentralised implementation of the multi-cluster services API specification that&apos;s particularly suited for teams looking for a lightweight and effective Kubernetes-centric mechanism to deploy multi-cluster workloads to the AWS cloud.</p><p>Photo by <a href="https://unsplash.com/@susan_wilkinson?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Susan Wilkinson</a> on <a href="https://unsplash.com/t/textures-patterns?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a></p>]]></content:encoded></item><item><title><![CDATA[Amazon ECS External Instance Network Sentry (eINS)]]></title><description><![CDATA[The eINS has been designed to provide an additional layer of resilience for ECS Anywhere external instances in deployment 
scenarios where connectivity to the on-region ECS control-plane may be unreliable or intermittent.]]></description><link>https://blog.bytequalia.com/aws-ecs-anywhere-network-sentry/</link><guid isPermaLink="false">612f2fb69e996e0001a3e4ba</guid><category><![CDATA[Amazon ECS]]></category><category><![CDATA[Cloud Native]]></category><dc:creator><![CDATA[Cameron Senese]]></dc:creator><pubDate>Sat, 11 Sep 2021 03:18:15 GMT</pubDate><media:content url="https://blog.bytequalia.com/content/images/2021/09/solen-feyissa-dNcpjPVjsoY-unsplash-1.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://blog.bytequalia.com/content/images/2021/09/solen-feyissa-dNcpjPVjsoY-unsplash-1.jpg" alt="Amazon ECS External Instance Network Sentry (eINS)"><p>The eINS has been designed to provide an additional layer of resilience for ECS external instances in deployment scenarios where connectivity to the on-region ECS control-plane may be unreliable or intermittent.</p><figure class="kg-card kg-image-card"><img src="https://blog.bytequalia.com/content/images/2021/09/solen-feyissa-dNcpjPVjsoY-unsplash.jpg" class="kg-image" alt="Amazon ECS External Instance Network Sentry (eINS)" loading="lazy"></figure><p>Deploying the eINS to ECS external instances will ensure that during periods where there is a loss of connectivity to the on-region ECS control-plane, any ECS managed containers which exit due to error will be automatically restarted.</p><blockquote>The <a href="https://github.com/aws-samples/ecs-external-instance-network-sentry">ecs-external-instance-network-sentry</a> git repository contains the application source code and end-user guide for configuring and deploying the ECS External Instance Network Sentry (eINS).</blockquote><h2 id="introduction">Introduction</h2><p>ECS Anywhere is an extension of Amazon ECS that will allow customers to deploy native Amazon ECS tasks in any environment. 
This includes the existing model on AWS managed infrastructure, as well as customer-managed infrastructure.</p><p>When extending ECS to customer managed infrastructure, external instances are registered to the ECS cluster. External instances are compute resources (hosts) external to an AWS region where ECS can schedule tasks to run. External instances are typically an on-premises server or virtual machine (VM).</p><p>During normal operation, where there is network connectivity between an ECS external instance and the on-region ECS control-plane - ECS monitors for errors or failures that occur to managed containers running on external instances and will restart any containers which have stopped due to an error.</p><p>For the duration of time that an ECS external instance loses network connectivity to the ECS on-region control-plane, any managed containers which have stopped due to an error will not be restarted by ECS until the point in time that network connectivity to the ECS on-region control-plane has been restored.</p><p>The eINS has been designed to detect any loss of connectivity to the on-region ECS control-plane, and to proactively ensure that for the duration of the outage that ECS managed containers which stop due to an error condition are automatically restarted.</p><h2 id="overview">Overview</h2><p>The eINS is a Python application which can either be run manually, or be configured to run as a service on ECS Anywhere external instances. 
<em>See the <a href="https://github.com/aws-samples/ecs-external-instance-network-sentry#Installation">Installation</a> section below for instructions for both deployment scenarios.</em></p><h3 id="connected-operation">Connected Operation</h3><p>The eINS periodically attempts to establish a TLS connection with the ECS on-region control-plane to determine region availability status, and the on-region ECS control-plane responds without error.</p><figure class="kg-card kg-image-card"><img src="https://github.com/aws-samples/ecs-external-instance-network-sentry/raw/main/images/eins-normal-operation-v0.01.png" class="kg-image" alt="Amazon ECS External Instance Network Sentry (eINS)" loading="lazy" title="eINS: Normal Operation"></figure><p>In reference to the diagram:</p><!--kg-card-begin: markdown--><ul>
<li>eINS TLS connection with the ECS on-region control-plane [1] completes successfully:
<ul>
<li>eINS takes no further action.</li>
</ul>
</li>
<li>In communication with the on-region control-plane [2] the ECS agent on the external instance orchestrates local managed container lifecycle, including restarting containers which exit due to error condition [3].</li>
</ul>
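The availability check at the heart of both scenarios (a periodic TLS connection to the on-region ECS control-plane) can be sketched in Python as follows. This is an illustrative sketch, not the actual eINS source: the hostname format is the standard regional ECS public endpoint, while the function names and default timeout are assumptions.

```python
# Illustrative sketch of the availability probe; not the actual eINS source.
# The hostname format is the standard regional ECS public endpoint; the
# function names and default timeout are assumptions.
import socket
import ssl

def ecs_endpoint(region):
    """Return the regional ECS public endpoint hostname."""
    return f"ecs.{region}.amazonaws.com"

def control_plane_reachable(region, timeout=5):
    """Attempt a TLS handshake with the regional ECS endpoint."""
    host = ecs_endpoint(region)
    context = ssl.create_default_context()
    try:
        with socket.create_connection((host, 443), timeout=timeout) as sock:
            with context.wrap_socket(sock, server_hostname=host):
                return True   # handshake succeeded: control-plane reachable
    except (OSError, ssl.SSLError):
        return False  # timeout, DNS failure, or TLS error: treat as an outage
```

When the probe succeeds, the eINS takes no further action and sleeps for the configured interval before checking again.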
<!--kg-card-end: markdown--><h3 id="disconnected-operation">Disconnected Operation</h3><p>The eINS periodically attempts to establish a TLS connection with the ECS on-region control-plane to determine region availability status, and the on-region ECS control-plane either does not respond, or returns an error.</p><figure class="kg-card kg-image-card"><img src="https://github.com/aws-samples/ecs-external-instance-network-sentry/raw/main/images/eins-no-connectivity-v0.01.png" class="kg-image" alt="Amazon ECS External Instance Network Sentry (eINS)" loading="lazy" title="eINS: Normal Operation"></figure><p>In reference to the diagram:</p><!--kg-card-begin: markdown--><ul>
<li>eINS TLS connection with the ECS on-region control-plane [1] experiences a timeout or returns an error condition:
<ul>
<li>The ECS agent is paused [3] via the local Docker API [2]*.</li>
<li>eINS updates Docker restart policy to <code>on-failure</code> for each ECS managed container [4]. This ensures that containers exiting with an error code will be automatically restarted by the Docker daemon.</li>
</ul>
</li>
<li>When the ECS control-plane becomes reachable:
<ul>
<li>ECS managed containers that have been automatically restarted by the Docker daemon during network outage are stopped and removed.**</li>
<li>ECS managed containers that have not been automatically restarted during network outage have their Docker restart policy set back to <code>no</code>.</li>
<li>The local ECS agent is un-paused.
<blockquote>
<p><em>At this point the operational environment has been restored back to the <a href="#Connected-Operation">Connected Operation</a> scenario. eINS will continue to monitor for network outage or ECS control-plane error.</em></p>
</blockquote>
</li>
</ul>
</li>
</ul>
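The disconnected-operation steps above can be sketched with the Python Docker SDK, which the eINS uses to drive the local Docker API. This is an illustrative sketch rather than the actual eINS source: the `com.amazonaws.ecs.cluster` label is the one the ECS agent applies to containers it manages, while the function names are assumptions.

```python
# Illustrative sketch of the disconnected-operation steps; not the eINS source.
try:
    import docker  # Python Docker SDK (pip install docker)
except ImportError:
    docker = None  # keeps the pure label check below usable without the SDK

# Label applied by the ECS agent to the containers it manages.
ECS_TASK_LABEL = "com.amazonaws.ecs.cluster"

def is_ecs_managed(labels):
    """Return True for containers launched by the ECS agent."""
    return ECS_TASK_LABEL in (labels or {})

def enter_disconnected_mode(client, max_retries=0):
    """Pause the ECS agent and let the Docker daemon restart failing tasks."""
    # Pause the agent first [2][3]: if left running it would detect and kill
    # containers that the Docker daemon restarts during the outage.
    agent = client.containers.get("ecs-agent")
    if agent.status == "running":
        agent.pause()
    # Set an on-failure restart policy on each ECS managed container [4].
    # MaximumRetryCount=0 means Docker retries an unlimited number of times.
    for container in client.containers.list(all=True):
        if container.name != "ecs-agent" and is_ecs_managed(container.labels):
            container.update(restart_policy={"Name": "on-failure",
                                             "MaximumRetryCount": max_retries})
```

Calling `enter_disconnected_mode(docker.from_env())` would apply the policy; the reverse transition (restoring the `no` restart policy and un-pausing the agent) follows the same pattern.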
<!--kg-card-end: markdown--><h4 id="notes">Notes</h4><p>*The ECS agent is paused because, if left running, it would detect and kill ECS managed containers that have been restarted by the Docker daemon during the period of network outage.</p><p>**These containers are stopped and removed by eINS to avoid duplication:</p><ul><li>Containers that have been restarted by the Docker daemon during a network outage become orphaned by ECS once back online.</li><li>The related ECS tasks are re-launched by ECS on the external instance once the ECS agent has established communication with the control-plane.</li></ul><h3 id="configuration-parameters">Configuration Parameters</h3><p>The eINS accepts configuration parameters as command line arguments. Running the application with the <code>--help</code> parameter prints a summary of available parameters:</p><pre><code class="language-bash">$ python3 ecs-external-instance-network-sentry.py --help
usage: ecs-external-instance-network-sentry [-h] -r REGION [-i INTERVAL] [-n RETRIES] [-l LOGFILE] [-k LOGLEVEL]

Purpose:
--------------
For use on ECS Anywhere external hosts:
Configures ECS orchestrated containers to automatically restart
on failure when on-region ecs control-plane is detected to be unreachable.

Configuration Parameters:
--------------
  -h, --help            Show this help message and exit.
  -r REGION, --region REGION
                        AWS region where ecs cluster is located.
  -i INTERVAL, --interval INTERVAL
                        Interval in seconds sentry will sleep between connectivity checks.
  -n RETRIES, --retries RETRIES
                        Number of times Docker will restart a crashing container.
  -l LOGFILE, --logfile LOGFILE
                        Logfile name &amp; location.
  -k LOGLEVEL, --loglevel LOGLEVEL
                        Log data event severity.</code></pre><p></p><p>Configuration parameters are described in further detail following:</p><p><code><strong>--region</strong></code><br>Provide the name of the AWS region where the ECS cluster that manages the external instance is hosted. eINS will attempt to establish a TLS connection to the ECS public endpoint at the nominated region to evaluate ECS control-plane availability.</p><ul><li>optional=no</li><li>default=&quot;&quot;</li></ul><pre><code class="language-bash">$ python3 ecs-external-instance-network-sentry.py --region ap-southeast-2</code></pre><p></p><p><code><strong>--interval</strong></code><br>Specify the number of seconds between connectivity tests.</p><ul><li>optional=yes</li><li>default=20</li></ul><pre><code class="language-bash">$ python3 ecs-external-instance-network-sentry.py --region ap-southeast-2 --interval 15</code></pre><p></p><p><code><strong>--retries</strong></code><strong><br></strong>Specify the number of times failing containers will be restarted during periods where the ECS control-plane is unavailable. The default setting is <code>0</code> which configures the Docker daemon to restart containers an unlimited number of times.</p><ul><li>optional=yes</li><li>default=0</li></ul><pre><code class="language-bash">$ python3 ecs-external-instance-network-sentry.py --region ap-southeast-2 --interval 15 --retries 5</code></pre><p></p><p><code><strong>--logfile</strong></code><strong><br></strong>Specify logfile name and file-system path. 
The default value is /tmp/ecs-external-instance-network-sentry.log.</p><ul><li>optional=yes</li><li>default=/tmp/ecs-external-instance-network-sentry.log</li></ul><pre><code class="language-bash">$ python3 ecs-external-instance-network-sentry.py --region ap-southeast-2 --interval 15 --retries 5 --logfile /mypath/myfile.log</code></pre><p></p><p><code><strong>--loglevel</strong></code><br>Specify log data event severity.</p><ul><li>optional=yes</li><li>default=INFO</li></ul><pre><code class="language-bash">$ python3 ecs-external-instance-network-sentry.py --region ap-southeast-2 --interval 15 --retries 5 --logfile /mypath/myfile.log --loglevel DEBUG</code></pre><p></p><h2 id="installation">Installation</h2><p>It&apos;s recommended that the external instance first be registered with ECS before installing the eINS. Installation instructions for eINS are provided below in the correct order of precedence.</p><blockquote><em>Commands provided assume that the external instance host operating system is Ubuntu 20.</em></blockquote><h3 id="prerequisites">Prerequisites</h3><p>The following prerequisites should be implemented prior to deploying the eINS.</p><h4 id="ecs-anywhere">ECS Anywhere</h4><p>Each external instance you register with an Amazon ECS cluster requires the SSM Agent, the Amazon ECS container agent, and Docker to be installed. To register the external instance to an Amazon ECS cluster, it must first be registered as an AWS Systems Manager managed instance. You can generate the comprehensive installation script in a few clicks on the Amazon ECS console. 
Follow the instructions as described <a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-anywhere-registration.html" rel="nofollow">here</a>.</p><h4 id="python">Python</h4><p>The eINS has been developed and tested running on Python version 3.8.10.</p><h4 id="python-docker-sdk">Python Docker SDK</h4><p>The eINS interacts with the Docker API, which requires installation of the Python Docker SDK on each external instance where the eINS will run. To install the Python Docker SDK, run the commands as follows:</p><pre><code class="language-bash"># update package index files..
$ apt-get update
# install python docker sdk..
$ python3 -m pip install docker</code></pre><p></p><h4 id="clone-the-eins-git-repository">Clone the eINS git repository</h4><p>On the ECS external instance, clone the ecs-external-instance-network-sentry repository:</p><pre><code class="language-bash"># clone eins git repo..
$ git clone https://github.com/aws-samples/ecs-external-instance-network-sentry.git</code></pre><p></p><blockquote><em>Commands from this point forward will assume that you are in the root directory of the local git repository clone.</em></blockquote><h3 id="manual-operation">Manual Operation</h3><p>At this point the external instance host operating system is ready to run the eINS. For testing or evaluation the application can be launched manually.</p><p>The application is located within the <code>/python</code> directory of the git repository. See the <a href="https://github.com/aws-samples/ecs-external-instance-network-sentry#Configuration-Parameters">Configuration Parameters</a> section for required and optional parameters to be submitted at runtime. Remember to provide the correct AWS region code:</p><pre><code class="language-bash"># manual launch..
$ python3 python/ecs-external-instance-network-sentry.py --region ap-southeast-2</code></pre><p></p><h3 id="background-service">Background Service</h3><p>Configuring the application as an OS background service is an effective mechanism to ensure that the eINS remains running in the background at all times.</p><p>Service configuration requires the implementation of a unit configuration file which encodes information about the process that will be controlled and supervised by systemd.</p><h3 id="configuration-procedure">Configuration Procedure</h3><p>The following describes configuring the eINS as an OS background service.</p><h4 id="copy-application-and-configuration-files">Copy application and configuration files</h4><p>Run the following commands to copy application and configuration files to the appropriate locations on the external instance file system:</p><pre><code class="language-bash"># copy eins application file..
$ cp python/ecs-external-instance-network-sentry.py /usr/bin
# copy eins service unit config file..
$ cp config/ecs-external-instance-network-sentry.service /lib/systemd/system</code></pre><p></p><h4 id="update-service-unit-configuration-file">Update service unit configuration file</h4><p>Next, update the service unit configuration file <code>/lib/systemd/system/ecs-external-instance-network-sentry.service</code>.</p><pre><code class="language-bash">$ cat /lib/systemd/system/ecs-external-instance-network-sentry.service

[Unit]
Description=Amazon ECS External Instance Network Service
Documentation=https://github.com/aws-samples/ecs-external-instance-network-sentry
Requires=docker.service
After=ecs.service

[Service]
Type=simple
Restart=on-failure
RestartSec=10s
ExecStart=python3 /usr/bin/ecs-external-instance-network-sentry.py --region &lt;INSERT-REGION-NAME-HERE&gt;

[Install]
WantedBy=multi-user.target</code></pre><p></p><p>Make the necessary modifications to the <code>ExecStart</code> directive on line-11 of the service unit config file as follows:</p><ul><li>Update the <code>--region</code> configuration parameter with the AWS region name where your on-region ECS cluster is provisioned.</li><li>Optionally, include any additional <a href="https://github.com/aws-samples/ecs-external-instance-network-sentry#Configuration-Parameters">Configuration Parameters</a> to suit the particular requirements of your deployment scenario.</li></ul><h4 id="configure-and-start-service">Configure and start service</h4><pre><code class="language-bash"># reload systemd..
$ systemctl daemon-reload
# enable eins service..
$ sudo systemctl enable ecs-external-instance-network-sentry.service
# start eins service..
$ systemctl start ecs-external-instance-network-sentry</code></pre><p></p><h4 id="check-service-status">Check service status</h4><p>To validate that the service has started successfully, run the following command; the output should be similar to the following:</p><pre><code class="language-bash">$ systemctl status ecs-external-instance-network-sentry

&#x25CF; ecs-external-instance-network-sentry.service - Amazon ECS External Instance Network Service
     Loaded: loaded (/lib/systemd/system/ecs-external-instance-network-sentry.service; enabled; vendor preset: enabled)
     Active: active (running) since Fri 2021-07-30 07:57:08 UTC; 22min ago
       Docs: https://github.com/aws-samples/ecs-external-instance-network-sentry
   Main PID: 28366 (python3)
      Tasks: 1 (limit: 9412)
     Memory: 19.7M
     CGroup: /system.slice/ecs-external-instance-network-sentry.service
             &#x2514;&#x2500;28366 /usr/bin/python3 /usr/bin/ecs-external-instance-network-sentry.py --region ap-southeast-2 --interval 10 --retries 3 --logfile /tmp/ecs-&gt;

Jul 30 07:57:08 ubu20 systemd[1]: Started Amazon ECS External Instance Network Service.</code></pre><h2 id="logging">Logging</h2><p>The eINS has been configured to provide basic logging regarding its operation.</p><p>The default logfile location is <code>/tmp/ecs-external-instance-network-sentry.log</code>, which can be modified by submitting the <code>--logfile</code> configuration parameter.</p><h3 id="log-level">Log Level</h3><p>By default, the loglevel is set to <code>logging.INFO</code> and can be updated at runtime using the <code>--loglevel</code> configuration parameter.</p><h3 id="log-output">Log Output</h3><p>The following eINS logfile excerpt illustrates:</p><ul><li>A detected loss of connectivity to the on-region control-plane, and the associated Docker policy configuration actions for ECS managed containers.</li><li>Container cleanup and Docker policy configuration once the ECS control-plane becomes reachable.</li></ul><pre><code class="language-bash">2021-07-10 09:00:01,200 INFO PID_713928 [startup] ecs-external-instance-network-sentry - starting..
2021-07-10 09:00:01,200 INFO PID_713928 [startup] arg - aws region: ap-southeast-2
2021-07-10 09:00:01,200 INFO PID_713928 [startup] arg - interval: 10
2021-07-10 09:00:01,201 INFO PID_713928 [startup] arg - retries: 0
2021-07-10 09:00:01,201 INFO PID_713928 [startup] arg - logfile: /tmp/ecs-external-instance-network-sentry.log
2021-07-10 09:00:01,201 INFO PID_713928 [startup] arg - loglevel: logging.INFO
...
...
2021-07-10 09:39:33,756 INFO PID_713928 [begin] connectivity test..
2021-07-10 09:39:33,757 INFO PID_713928 [connect] connecting to ecs at ap-southeast-2..
2021-07-10 09:39:33,757 INFO PID_713928 [connect] create network socket..
2021-07-10 09:39:43,764 ERROR PID_713928 [connect] error creating network socket: [Errno -3] Temporary failure in name resolution
2021-07-10 09:39:43,764 INFO PID_713928 [connect] connecting to host..
2021-07-10 09:39:43,765 INFO PID_713928 [ecs-offline] ecs unreachable, configuring container restart policy..
2021-07-10 09:39:43,880 INFO PID_713928 [ecs-offline] container name: ecs-alpine-crash-test-9adba798f5f189968701
2021-07-10 09:39:43,881 INFO PID_713928 [ecs-offline] ecs cluster: ecs-anywhere-cluster-1
2021-07-10 09:39:43,882 INFO PID_713928 [ecs-offline] set container restart policy: {&apos;Name&apos;: &apos;on-failure&apos;, &apos;MaximumRetryCount&apos;: 0}
2021-07-10 09:39:43,958 INFO PID_713928 [ecs-offline] container name: ecs-nginx-1-nginx-eaa6e7a9b0cd88988201
2021-07-10 09:39:43,959 INFO PID_713928 [ecs-offline] ecs cluster: ecs-anywhere-cluster-1
2021-07-10 09:39:43,959 INFO PID_713928 [ecs-offline] set container restart policy: {&apos;Name&apos;: &apos;on-failure&apos;, &apos;MaximumRetryCount&apos;: 0}
2021-07-10 09:39:44,022 INFO PID_713928 [ecs-offline] ecs agent paused..
2021-07-10 09:39:44,022 INFO PID_713928 [end] sleeping for 10 seconds..
...
...
2021-07-10 09:41:14,298 INFO PID_713928 [begin] connectivity test..
2021-07-10 09:41:14,299 INFO PID_713928 [connect] connecting to ecs at ap-southeast-2..
2021-07-10 09:41:14,299 INFO PID_713928 [connect] create network socket..
2021-07-10 09:41:23,133 INFO PID_713928 [connect] connecting to host..
2021-07-10 09:41:23,258 INFO PID_713928 [connect] send/receive data..
2021-07-10 09:41:30,563 INFO PID_713928 [connect] ecs at ap-southeast-2 is available..
2021-07-10 09:41:30,564 INFO PID_713928 [ecs-online] ecs is reachable..
2021-07-10 09:41:30,621 INFO PID_713928 [ecs-online] container name: ecs-alpine-crash-test-9adba798f5f189968701
2021-07-10 09:41:30,621 INFO PID_713928 [ecs-online] ecs cluster: ecs-anywhere-cluster-1
2021-07-10 09:41:30,622 INFO PID_713928 [ecs-online] container has been restarted by docker, stopping &amp; removing..
2021-07-10 09:41:41,330 INFO PID_713928 [ecs-online] container name: ecs-nginx-1-nginx-eaa6e7a9b0cd88988201
2021-07-10 09:41:41,330 INFO PID_713928 [ecs-online] ecs cluster: ecs-anywhere-cluster-1
2021-07-10 09:41:41,331 INFO PID_713928 [ecs-online] set container restart policy: {&apos;Name&apos;: &apos;no&apos;, &apos;MaximumRetryCount&apos;: 0}
2021-07-10 09:41:41,470 INFO PID_713928 [ecs-online] ecs agent unpaused..
2021-07-10 09:41:41,471 INFO PID_713928 [end] sleeping for 10 seconds..</code></pre><p></p><h3 id="log-rotation">Log Rotation</h3><p>The logfile rotates at 5 MB, and a history of the five most recent logfiles is maintained.</p><h2 id="considerations">Considerations</h2><p>The eINS currently has the following limitations:</p><ul><li>If the external instance OS reboots, or the Docker daemon restarts, while the ECS control-plane is unavailable, the eINS will not start previously running ECS managed containers.</li><li>As described in the <a href="https://github.com/aws-samples/ecs-external-instance-network-sentry#Disconnected-Operation">Disconnected Operation</a> section, containers that have been restarted during a period where the ECS control-plane is unavailable will be stopped once the ECS control-plane becomes available.</li></ul><p>Photo by <a href="https://unsplash.com/@solenfeyissa?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Solen Feyissa</a> on <a href="https://unsplash.com/t/textures-patterns?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a></p>]]></content:encoded></item><item><title><![CDATA[Secure workload isolation with Amazon EKS Distro and Kata Containers]]></title><description><![CDATA[Combining Kata Containers with Amazon EKS Distro provides secure VM workload isolation on the same secure Kubernetes distribution that runs on Amazon EKS.]]></description><link>https://blog.bytequalia.com/eksd-kata-containers/</link><guid isPermaLink="false">6083793c811ef10001deb351</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[Cloud Native]]></category><dc:creator><![CDATA[Cameron Senese]]></dc:creator><pubDate>Sat, 01 May 2021 07:24:16 GMT</pubDate><media:content url="https://blog.bytequalia.com/content/images/2021/05/malena-gonzalez-serena-YBffurnz4KQ-unsplash-1.jpg" medium="image"/><content:encoded><![CDATA[<blockquote>The <a 
href="https://gitlab.com/byteQualia/eksd-kata-containers">eksd-kata-containers</a> git repository provides a comprehensive step-by-step guide, with all referenced configuration files and scripts.</blockquote><h2 id="introduction">Introduction</h2><img src="https://blog.bytequalia.com/content/images/2021/05/malena-gonzalez-serena-YBffurnz4KQ-unsplash-1.jpg" alt="Secure workload isolation with Amazon EKS Distro and Kata Containers"><p>Containers have introduced a paradigm shift in how we work with applications, and due to the additional efficiencies in deployment, packaging, and development, the rate of adoption has been skyrocketing.</p><p>Many people new to containerisation tend to adopt the mental model that containers are simply a better and faster way of running virtual machines (VMs). In many respects this analogy holds up (albeit from a very simplistic point of view); however, from a security perspective, the two technologies provide a very different posture.</p><figure class="kg-card kg-image-card"><img src="https://blog.bytequalia.com/content/images/2021/05/malena-gonzalez-serena-YBffurnz4KQ-unsplash.jpg" class="kg-image" alt="Secure workload isolation with Amazon EKS Distro and Kata Containers" loading="lazy"></figure><p>Standard Linux containers allow applications to make system calls directly to the host operating system (OS) kernel, in a similar way that non-containerised applications do - whereas in a VM environment, processes in a virtual machine simply do not have visibility of the host OS kernel.</p><p>If you&apos;re not running untrusted code in your containers, or hosting a multi-tenant platform, and you&apos;ve implemented good security practices for the services running within each container, you probably don&apos;t need to worry.</p><p>But for those of us that are faced with the challenge of needing to run untrusted code in our containers, or perhaps are hosting a multi-tenant platform - providing the highest levels of isolation 
between workloads in a Kubernetes environment can be challenging.</p><p>An effective approach to improve workload isolation is to run each Pod within its own dedicated VM. This provides each Pod with a dedicated hypervisor, OS kernel, memory, and virtualized devices which are completely separate from the host OS. In this deployment scenario, when there&apos;s a vulnerability in the containerised workload, the hypervisor within the Pod provides a security boundary which protects the host operating system, as well as other workloads running on the host.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.bytequalia.com/content/images/2021/05/kata-vs-traditional.png" class="kg-image" alt="Secure workload isolation with Amazon EKS Distro and Kata Containers" loading="lazy"><figcaption>Image courtesy of <a href="https://katacontainers.io">https://katacontainers.io</a></figcaption></figure><p>If you&apos;re running on the AWS cloud, Amazon has made this approach very simple. Scheduling Pods using the managed Kubernetes service <a href="https://aws.amazon.com/eks">EKS</a> with <a href="https://aws.amazon.com/fargate/">Fargate</a> ensures that each Kubernetes Pod is automatically encapsulated inside its own dedicated VM. This provides the highest level of isolation for each containerised workload.</p><p>If you need to provide a similar level of workload isolation to EKS with Fargate when operating outside of the AWS cloud (e.g. on premises, or at the edge in a hybrid deployment scenario), then <a href="https://katacontainers.io">Kata Containers</a> is a technology worth considering. 
Kata Containers is an implementation of a lightweight VM that seamlessly integrates with the container ecosystem, and can be used by Kubernetes to schedule Pods inside of VMs.</p><p>The following tutorial will take you through a deployment scenario where we bootstrap a Kubernetes cluster using Amazon EKS Distro (EKS-D), and configure Kubernetes to be capable of scheduling Pods inside VMs using Kata Containers.</p><p>This is a deployment pattern that can be adopted to provide a very high degree of workload isolation when provisioning clusters outside of the AWS cloud, for example on-premises, edge locations, or on alternate cloud platforms:</p><ul><li>EKS-D provides the same software that has enabled tens of thousands of Kubernetes clusters on Amazon EKS. This includes the latest upstream updates, as well as extended security patching support.</li><li>In-cluster workload isolation is further enhanced by providing the ability to schedule Pods inside a dedicated VM using Kata Containers.</li></ul><h3 id="about-kata-containers">About Kata Containers</h3><p>Kata Containers utilizes open source hypervisors as an isolation boundary for each container (or collection of containers in a Pod).</p><p>With Kata Containers, a second layer of isolation is created on top of those provided by traditional namespace containers. The hardware virtualization interface is the basis of this additional layer. Kata launches a lightweight virtual machine, and uses the VM guest&#x2019;s Linux kernel to create a container workload, or workloads in the case of multi-container Pods. In Kubernetes and in the Kata implementation, the sandbox is implemented at the Pod level. 
In Kata, this sandbox is created using a virtual machine.</p><p>Kata currently supports <a href="https://github.com/kata-containers/kata-containers/blob/main/docs/hypervisors.md">multiple hypervisors</a>, including: QEMU/KVM, Cloud Hypervisor/KVM, and Firecracker/KVM.</p><h4 id="kata-containers-with-kubernetes">Kata Containers with Kubernetes</h4><p>Kubernetes Container Runtime Interface (CRI) implementations allow using any OCI-compatible runtime with Kubernetes, such as the Kata Containers runtime. Kata Containers support both the <a href="https://github.com/kubernetes-incubator/cri-o">CRI-O</a> and <a href="https://github.com/containerd/cri">CRI-containerd</a> CRI implementations.</p><p>Kata Containers 1.5 introduced the shimv2 for containerd 1.2.0, reducing the components required to spawn Pods and containers. This is currently the preferred way to run Kata Containers with Kubernetes.</p><p>When configuring Kubernetes to integrate with Kata, typically a Kubernetes <a href="https://kubernetes.io/docs/concepts/containers/runtime-class/"><code>RuntimeClass</code></a> is created. The <code>RuntimeClass</code> provides the ability to select the container runtime configuration to be used for a given workload via the Pod spec submitted to the Kubernetes API.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.bytequalia.com/content/images/2021/05/kata-shim-v2.png" class="kg-image" alt="Secure workload isolation with Amazon EKS Distro and Kata Containers" loading="lazy"><figcaption>Image courtesy of <a href="https://katacontainers.io">https://katacontainers.io</a></figcaption></figure><h4 id="about-amazon-eks-distro">About Amazon EKS Distro</h4><p><a href="https://distro.eks.amazonaws.com">Amazon EKS Distro</a> is a Kubernetes distribution used by Amazon EKS to help create reliable and secure clusters. 
EKS Distro includes binaries and containers from open source Kubernetes, etcd (cluster configuration database), networking, storage, and plugins, all tested for compatibility. You can deploy EKS Distro wherever your applications need to run.</p><p>You can deploy EKS Distro clusters and let AWS take care of testing and tracking Kubernetes updates, dependencies, and patches. The source code, open source tools, and settings are provided for consistent, reproducible builds. EKS Distro provides extended support for Kubernetes, with builds of previous versions updated with the latest security patches. EKS Distro is available as open source on <a href="https://github.com/aws/eks-distro">GitHub</a>.</p><h2 id="tutorial">Tutorial</h2><h3 id="overview">Overview</h3><p>This tutorial will guide you through the following procedure:</p><ul><li>Installing Kata Containers onto a bare metal host</li><li>Installing and configuring containerd to integrate with Kata Containers</li><li>Bootstrapping an EKS Distro Kubernetes cluster using kubeadm</li><li>Configuring a Kubernetes RuntimeClass to schedule Pods to Kata VMs running the QEMU/KVM hypervisor</li></ul><blockquote>The example EKS-D cluster deployment uses kubeadm to bring up the control-plane, which may not be your preferred method to bootstrap a cluster in an environment outside of a managed cloud provider. A number of AWS partners are also providing installation support for EKS Distro, including: Canonical (MicroK8s), Kubermatic (KubeOne), Kubestack, Nirmata, Rancher, and Weaveworks. 
For further information, see the <a href="https://distro.eks.amazonaws.com/users/install/partners/">Partners section</a> at the EKS Distro website.</blockquote><h3 id="prerequisites">Prerequisites</h3><p>Kubeadm is a tool built to provide <code>kubeadm init</code> and <code>kubeadm join</code> as best-practice &quot;fast paths&quot; for creating Kubernetes clusters.</p><p>You will need to use a Linux system that kubeadm supports, as described in the <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/">kubeadm documentation</a> to verify that the system has the required amount of memory, CPU, and other resources.</p><p>Kata Containers requires nested virtualization or bare metal. Review the <a href="https://github.com/kata-containers/kata-containers/blob/main/src/runtime/README.md#hardware-requirements">hardware requirements</a> to see if your system is capable of running Kata Containers.</p><h4 id="clone-the-eksd-kata-containers-git-repository">Clone the <code><strong>eksd-kata-containers</strong></code> git repository</h4><p>Sample configuration files will be used through the course of the tutorial, which have been made available within the eksd-kata-containers repository.</p><p>Clone the eksd-kata-containers repository to the host on which you will be bootstrapping the cluster:</p><pre><code class="language-bash">git clone https://gitlab.com/byteQualia/eksd-kata-containers.git
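</code></pre><p></p><p>For orientation, the repository content referenced throughout this tutorial is laid out roughly as follows (inferred from the steps below):</p><pre><code class="language-bash">eksd-kata-containers/
├── config/
│   ├── config.toml        # containerd configuration with Kata runtime classes
│   ├── kube.yaml          # kubeadm InitConfiguration template
│   ├── runtimeclass.yaml  # kata RuntimeClass definition
│   ├── nginx-kata.yaml    # Pod spec using the kata runtime
│   └── nginx.yaml         # Pod spec using the default runc runtime
└── script/                # containerd/Kata test script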
</code></pre><p></p><h3 id="bootstrap-the-cluster">Bootstrap the Cluster</h3><p>Next, we bootstrap the cluster using kubeadm.</p><h4 id="prepare-the-host">Prepare the host</h4><p>Make sure SELinux is disabled by setting SELINUX=disabled in the /etc/sysconfig/selinux file. To turn it off immediately, type:</p><pre><code class="language-bash">sudo setenforce 0
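</code></pre><p></p><p>For reference, the corresponding line in <code>/etc/sysconfig/selinux</code> should read:</p><pre><code class="language-bash">SELINUX=disabled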
</code></pre><p></p><p>Make sure that swap is disabled and that no swap areas are reinstated on reboot. For example, type:</p><pre><code class="language-bash">sudo swapoff -a
</code></pre><p></p><p>Permanently disable swap by commenting out or deleting any swap areas in /etc/fstab.</p><p>Depending on the exact Linux system you installed, you may need to install additional packages. For example, with an RPM-based (Amazon Linux, CentOS, RHEL or Fedora), ensure that the <code>iproute-tc</code>, <code>socat</code>, and <code>conntrack-tools</code> packages are installed.</p><p>To optionally enable a firewall, run the following commands, including opening ports required by Kubernetes:</p><pre><code class="language-bash">sudo yum install firewalld -y
sudo systemctl start firewalld
sudo systemctl enable firewalld
sudo firewall-cmd --zone=public --permanent --add-port=6443/tcp --add-port=2379-2380/tcp --add-port=10250-10252/tcp
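</code></pre><p></p><p>As a quick sanity check, the port specifications opened above can be expanded into a flat list, for example to drive a reachability probe from another host. The helper below is an illustrative sketch and is not part of the tutorial tooling:</p><pre><code class="language-python"># expand single ports and inclusive (low, high) ranges into a flat list
def expand_ports(specs):
    ports = []
    for spec in specs:
        lo, hi = spec if isinstance(spec, tuple) else (spec, spec)
        ports.extend(range(lo, hi + 1))
    return ports

# 6443 (API server), 2379-2380 (etcd), 10250-10252 (kubelet, scheduler, controller-manager)
print(expand_ports([6443, (2379, 2380), (10250, 10252)]))
# [6443, 2379, 2380, 10250, 10251, 10252]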
</code></pre><p></p><h4 id="install-container-runtime-kata-containers-and-supporting-services">Install container runtime, Kata Containers, and supporting services</h4><p>Next we need to install a container runtime (<a href="https://containerd.io">containerd</a> in this example), Kata Containers, and the EKS-D versions of Kubernetes software components.</p><p>It&apos;s recommended that for production environments both the containerd runtime and Kata Containers are installed using official distribution packages. In this example we will utilise <a href="https://github.com/kata-containers/kata-containers/blob/main/utils/README.md">Kata Manager</a>, which will perform a scripted installation of both components:</p><pre><code class="language-bash">repo=&quot;github.com/kata-containers/tests&quot;
go get -d &quot;$repo&quot;
PATH=$PATH:$GOPATH/src/${repo}/cmd/kata-manager
kata-manager.sh install-packages
</code></pre><p></p><p>Once installed, update the system path to include Kata binaries:</p><pre><code class="language-bash">sudo su
PATH=$PATH:/opt/kata/bin/
echo &quot;PATH=$PATH:/opt/kata/bin/&quot; &gt;&gt; .profile
exit
</code></pre><p></p><p>Verify the host is capable of running Kata Containers:</p><pre><code class="language-bash">kata-runtime kata-check
</code></pre><p></p><p>Example output generated on a supported system will read as similar to the following:</p><pre><code class="language-bash">sudo kata-runtime kata-check
WARN[0000] Not running network checks as super user      arch=amd64 name= pid=9064 source=runtime
System is capable of running Kata Containers
System can currently create Kata Containers
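</code></pre><p></p><p>One of the checks <code>kata-check</code> performs can be approximated with a short sketch (for illustration only): on x86 hosts it looks for the <code>vmx</code> (Intel) or <code>svm</code> (AMD) hardware virtualization flag in <code>/proc/cpuinfo</code>:</p><pre><code class="language-python"># approximate check: does /proc/cpuinfo advertise hardware virtualization?
def has_virt_flags(cpuinfo_text):
    for line in cpuinfo_text.splitlines():
        if line.startswith(&quot;flags&quot;):
            flags = line.split(&quot;:&quot;, 1)[1].split()
            return &quot;vmx&quot; in flags or &quot;svm&quot; in flags
    return False

sample = &quot;processor : 0\nflags : fpu vme de vmx ssse3&quot;
print(has_virt_flags(sample))  # True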
</code></pre><p></p><h4 id="configure-container-runtime-for-kata">Configure container runtime for Kata</h4><p>cri is a native plugin of containerd 1.1 and above; it&apos;s built into containerd and enabled by default. To configure containerd to schedule Kata containers, update the containerd configuration file located at <code>/etc/containerd/config.toml</code> with the following configuration, which defines three runtime classes:</p><ul><li><code>plugins.cri.containerd.runtimes.runc</code>: runc, the default runtime.</li><li><code>plugins.cri.containerd.runtimes.kata</code>: the Kata runtime, where containerd translates the dot-connected string <code>io.containerd.kata.v2</code> into <code>containerd-shim-kata-v2</code> (i.e. the binary name of the Kata implementation of <a href="https://github.com/containerd/containerd/tree/master/runtime/v2">Containerd Runtime V2 (Shim API)</a>; see <a href="https://github.com/containerd/containerd/tree/master/runtime/v2#binary-naming">the binary naming documentation</a>).</li><li><code>plugins.cri.containerd.runtimes.katacli</code>: the legacy process, where <code>containerd-shim-runc-v1</code> calls <code>kata-runtime</code>.</li></ul><p>Example config.toml:</p><pre><code class="language-bash">[plugins.cri.containerd]
 no_pivot = false
[plugins.cri.containerd.runtimes]
 [plugins.cri.containerd.runtimes.runc]
    runtime_type = &quot;io.containerd.runc.v1&quot;
    [plugins.cri.containerd.runtimes.runc.options]
      NoPivotRoot = false
      NoNewKeyring = false
      ShimCgroup = &quot;&quot;
      IoUid = 0
      IoGid = 0
      BinaryName = &quot;runc&quot;
      Root = &quot;&quot;
      CriuPath = &quot;&quot;
      SystemdCgroup = false
 [plugins.cri.containerd.runtimes.kata]
    runtime_type = &quot;io.containerd.kata.v2&quot;
[plugins.cri.containerd.runtimes.kata.options]
  ConfigPath = &quot;/opt/kata/share/defaults/kata-containers/configuration-qemu.toml&quot;
 [plugins.cri.containerd.runtimes.katacli]
    runtime_type = &quot;io.containerd.runc.v1&quot;
    [plugins.cri.containerd.runtimes.katacli.options]
      NoPivotRoot = false
      NoNewKeyring = false
      ShimCgroup = &quot;&quot;
      IoUid = 0
      IoGid = 0
      BinaryName = &quot;/opt/kata/bin/kata-runtime&quot;
      Root = &quot;&quot;
      CriuPath = &quot;&quot;
      SystemdCgroup = false
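</code></pre><p></p><p>The runtime-type-to-binary translation described above follows the containerd runtime v2 naming convention, which can be sketched as follows (illustrative only):</p><pre><code class="language-python"># containerd runtime v2 binary naming: io.containerd.NAME.VERSION
# resolves to a shim binary named containerd-shim-NAME-VERSION
def shim_binary(runtime_type):
    _, _, name, version = runtime_type.split(&quot;.&quot;)
    return f&quot;containerd-shim-{name}-{version}&quot;

print(shim_binary(&quot;io.containerd.kata.v2&quot;))  # containerd-shim-kata-v2
print(shim_binary(&quot;io.containerd.runc.v1&quot;))  # containerd-shim-runc-v1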
</code></pre><p></p><p>From the eksd-kata-containers repository, copy the file /config/config.toml to /etc/containerd/config.toml and restart containerd:</p><pre><code class="language-bash">sudo systemctl stop containerd
sudo cp eksd-kata-containers/config/config.toml /etc/containerd/
sudo systemctl start containerd
</code></pre><p></p><p>containerd is now able to run containers using the Kata Containers runtime.</p><h4 id="test-kata-with-containerd">Test Kata with containerd</h4><p>In order to test that containerd can successfully run a Kata container, a shell script named check-kata.sh has been provided in the script directory within the eksd-kata-containers repository.</p><p>check-kata.sh uses the ctr CLI utility to pull and run a busybox image as a Kata container, and retrieves the kernel version from within the Kata VM. The script returns both the kernel version reported by busybox from within the Kata VM, as well as the host OS kernel version. Per the sample output, the container kernel (hosted in the VM) differs from the host OS kernel:</p><pre><code class="language-bash">chmod +x eksd-kata-containers/script/check-kata.sh
./eksd-kata-containers/script/check-kata.sh

Testing Kata Containers..

docker.io/library/busybox:latest:                                                 resolved       |++++++++++++++++++++++++++++++++++++++|
index-sha256:ae39a6f5c07297d7ab64dbd4f82c77c874cc6a94cea29fdec309d0992574b4f7:    exists         |++++++++++++++++++++++++++++++++++++++|
manifest-sha256:1ccc0a0ca577e5fb5a0bdf2150a1a9f842f47c8865e861fa0062c5d343eb8cac: exists         |++++++++++++++++++++++++++++++++++++++|
layer-sha256:f531cdc67389c92deac44e019e7a1b6fba90d1aaa58ae3e8192f0e0eed747152:    exists         |++++++++++++++++++++++++++++++++++++++|
config-sha256:388056c9a6838deea3792e8f00705b35b439cf57b3c9c2634fb4e95cfc896de6:   exists         |++++++++++++++++++++++++++++++++++++++|
elapsed: 2.0 s                                                                    total:   0.0 B (0.0 B/s)
unpacking linux/amd64 sha256:ae39a6f5c07297d7ab64dbd4f82c77c874cc6a94cea29fdec309d0992574b4f7...
done

Test successful:
  Host kernel version      : 4.14.225-169.362.amzn2.x86_64
  Container kernel version : 5.4.71
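</code></pre><p></p><p>The essence of the check is the final comparison: a container scheduled through Kata must report a different kernel from the host, because it runs under its own guest kernel. Restated with the sample values above:</p><pre><code class="language-python"># kernel versions taken from the sample output above
host_kernel = &quot;4.14.225-169.362.amzn2.x86_64&quot;
container_kernel = &quot;5.4.71&quot;

# a plain runc container would report the host kernel; a Kata container must not
assert container_kernel != host_kernel
print(&quot;container is running under its own guest kernel&quot;)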
</code></pre><p></p><p>The sample containerd configuration file will direct Kata to use the QEMU/KVM hypervisor, per the <code>ConfigPath</code> directive on line 19. Configuration files for Cloud Hypervisor/KVM and Firecracker/KVM are also installed with Kata Containers:</p><ul><li>Firecracker: <code>/opt/kata/share/defaults/kata-containers/configuration-fc.toml</code></li><li>Cloud Hypervisor: <code>/opt/kata/share/defaults/kata-containers/configuration-clh.toml</code></li></ul><p>To select an alternate hypervisor, update the <code>ConfigPath</code> directive and restart containerd.</p><h3 id="prepare-kubernetes-environment">Prepare Kubernetes environment</h3><p>Pull and retag the pause, coredns, and etcd containers (copy and paste as one line):</p><pre><code class="language-bash">sudo ctr image pull public.ecr.aws/eks-distro/kubernetes/pause:v1.18.9-eks-1-18-1;\
sudo ctr image pull public.ecr.aws/eks-distro/coredns/coredns:v1.7.0-eks-1-18-1; \
sudo ctr image pull public.ecr.aws/eks-distro/etcd-io/etcd:v3.4.14-eks-1-18-1; \
sudo ctr image tag public.ecr.aws/eks-distro/kubernetes/pause:v1.18.9-eks-1-18-1 public.ecr.aws/eks-distro/kubernetes/pause:3.2; \
sudo ctr image tag public.ecr.aws/eks-distro/coredns/coredns:v1.7.0-eks-1-18-1 public.ecr.aws/eks-distro/kubernetes/coredns:1.6.7; \
sudo ctr image tag public.ecr.aws/eks-distro/etcd-io/etcd:v3.4.14-eks-1-18-1 public.ecr.aws/eks-distro/kubernetes/etcd:3.4.3-0
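</code></pre><p></p><p>The retagging step exists because kubeadm pulls these images by their upstream names and tags, while EKS-D publishes the same images under its own versioned tags. The mapping applied above, restated for clarity (a sketch, not an additional step):</p><pre><code class="language-python"># EKS-D image tag -&gt; tag expected by kubeadm (restating the ctr commands above)
registry = &quot;public.ecr.aws/eks-distro/&quot;
retags = {
    &quot;kubernetes/pause:v1.18.9-eks-1-18-1&quot;: &quot;kubernetes/pause:3.2&quot;,
    &quot;coredns/coredns:v1.7.0-eks-1-18-1&quot;: &quot;kubernetes/coredns:1.6.7&quot;,
    &quot;etcd-io/etcd:v3.4.14-eks-1-18-1&quot;: &quot;kubernetes/etcd:3.4.3-0&quot;,
}
for src, dst in retags.items():
    print(f&quot;sudo ctr image tag {registry}{src} {registry}{dst}&quot;)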
</code></pre><p></p><p>Add the RPM repository for the Google Cloud Kubernetes packages by creating the following <code>/etc/yum.repos.d/kubernetes.repo</code> file:</p><pre><code class="language-yaml">[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
</code></pre><p></p><p>Install the required Kubernetes packages:</p><pre><code class="language-bash">sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
</code></pre><p></p><p>Load the br_netfilter kernel module, and create /etc/modules-load.d/k8s.conf:</p><pre><code class="language-bash">echo br_netfilter | sudo tee /etc/modules-load.d/k8s.conf
sudo modprobe br_netfilter
echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
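</code></pre><p></p><p>Note that writing to <code>/proc/sys/net/ipv4/ip_forward</code> takes effect immediately but does not persist across reboots. A sysctl drop-in file is a common way to make it permanent (the path and the bridge setting below are conventional additions, not taken from the repository):</p><pre><code class="language-bash"># /etc/sysctl.d/k8s.conf  (apply with: sudo sysctl --system)
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1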
</code></pre><p></p><p>Create the /var/lib/kubelet directory, then configure the /var/lib/kubelet/kubeadm-flags.env file:</p><pre><code class="language-bash">sudo su
mkdir -p /var/lib/kubelet
cat &lt;&lt;EOF &gt;/var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS=&quot;--cgroup-driver=systemd --network-plugin=cni --pod-infra-container-image=public.ecr.aws/eks-distro/kubernetes/pause:3.2&quot;
EOF
exit
</code></pre><p></p><p>Get compatible binaries for kubeadm, kubelet, and kubectl:</p><pre><code class="language-bash">cd /usr/bin
sudo rm kubelet kubeadm kubectl
sudo wget https://distro.eks.amazonaws.com/kubernetes-1-18/releases/1/artifacts/kubernetes/v1.18.9/bin/linux/amd64/kubelet; \
sudo wget https://distro.eks.amazonaws.com/kubernetes-1-18/releases/1/artifacts/kubernetes/v1.18.9/bin/linux/amd64/kubeadm; \
sudo wget https://distro.eks.amazonaws.com/kubernetes-1-18/releases/1/artifacts/kubernetes/v1.18.9/bin/linux/amd64/kubectl
sudo chmod +x kubeadm kubectl kubelet
</code></pre><p></p><p>Enable the kubelet service:</p><pre><code class="language-bash">sudo systemctl enable kubelet</code></pre><p></p><h4 id="configure-kube-yaml">Configure kube.yaml</h4><p>A sample kube.yaml file has been provided in the config directory within the eksd-kata-containers repository.</p><p>Update the sample kube.yaml by providing the values for variables surrounded by {{ and }} within the <code>localAPIEndpoint</code> and <code>nodeRegistration</code> sections:</p><pre><code class="language-yaml">apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{ primary_ip }}
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  name: {{ primary_hostname }}
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
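</code></pre><p></p><p>For example, with purely illustrative values substituted (use your host&apos;s actual address and hostname):</p><pre><code class="language-yaml">localAPIEndpoint:
  advertiseAddress: 192.0.2.10    # example address only
  bindPort: 6443
nodeRegistration:
  name: eksd-host-1               # example hostname only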
</code></pre><p></p><h4 id="start-the-kubernetes-control-plane">Start the Kubernetes control-plane</h4><p>Run the <code>kubeadm init</code> command, identifying the config file as follows:</p><pre><code class="language-bash">sudo kubeadm init --config eksd-kata-containers/config/kube.yaml
...
[init] Using Kubernetes version: v1.18.9-eks-1-18-1
[preflight] Running pre-flight checks
...
[kubelet-finalize] Updating &quot;/etc/kubernetes/kubelet.conf&quot; to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!
</code></pre><p></p><p>Your Kubernetes cluster should now be up and running. The kubeadm output shows the exact commands to use to add nodes to the cluster. If something goes wrong, correct the problem and run <code>kubeadm reset</code> to prepare your system to run <code>kubeadm init</code> again.</p><h4 id="configure-the-cluster-to-schedule-kata-containers">Configure the cluster to schedule Kata Containers</h4><p>Configure the Kubernetes client locally:</p><pre><code class="language-bash">mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
</code></pre><p></p><p>Deploy a Pod network to the cluster. For this example, we deploy a Weaveworks network:</p><pre><code class="language-bash">kubectl apply -f &quot;https://cloud.weave.works/k8s/net?k8s-version=v1.18.9-eks-1-18-1&quot;
...
serviceaccount/weave-net created
clusterrole.rbac.authorization.k8s.io/weave-net created
clusterrolebinding.rbac.authorization.k8s.io/weave-net created
role.rbac.authorization.k8s.io/weave-net created
rolebinding.rbac.authorization.k8s.io/weave-net created
daemonset.apps/weave-net created
</code></pre><p></p><p>You can also consider Calico or Cilium networks. Calico is popular because it can be used to propagate routes with BGP, which is often used on-prem.</p><p>If you are testing with a single node, untaint your master node:</p><pre><code class="language-bash">kubectl taint nodes --all node-role.kubernetes.io/master-
</code></pre><p></p><p>A sample runtimeclass.yaml file has been provided in the config directory within the eksd-kata-containers repository:</p><pre><code class="language-yaml">apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata
handler: kata
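</code></pre><p></p><p>Optionally (this is not part of the sample file), a RuntimeClass can also declare the additional resources a Kata VM consumes per Pod, which the scheduler then accounts for. The values below are illustrative only:</p><pre><code class="language-yaml">apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata
handler: kata
overhead:
  podFixed:
    memory: &quot;160Mi&quot;    # example value only
    cpu: &quot;250m&quot;        # example value only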
</code></pre><p></p><p>Create the <code>kata</code> RuntimeClass:</p><pre><code class="language-bash">kubectl apply -f eksd-kata-containers/config/runtimeclass.yaml
</code></pre><p></p><h3 id="schedule-kata-containers-with-kubernetes">Schedule Kata Containers with Kubernetes</h3><p>Sample Pod specs have been provided in the config directory within the eksd-kata-containers repository.</p><p>nginx-kata.yaml will schedule a pod within a VM using Kata Containers by specifying <code>kata</code> as the <code>runtimeClassName</code>:</p><pre><code class="language-yaml">apiVersion: v1
kind: Pod
metadata:
  name: nginx-kata
spec:
  runtimeClassName: kata
  containers:
  - name: nginx
    image: nginx
</code></pre><p></p><p>nginx.yaml will schedule a pod using the default containerd runtime (runc):</p><pre><code class="language-yaml">apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
</code></pre><p></p><p>Schedule the Pods using kubectl:</p><pre><code class="language-bash">kubectl apply -f eksd-kata-containers/config/nginx-kata.yaml
kubectl apply -f eksd-kata-containers/config/nginx.yaml
</code></pre><p></p><p>You will now have two nginx pods running in the cluster, each using a different container runtime. To validate that the nginx-kata Pod has been scheduled inside a VM, exec into each container and retrieve the kernel version:</p><pre><code class="language-bash">kubectl exec -it nginx-kata -- bash -c &quot;uname -r&quot;
5.4.71
kubectl exec -it nginx -- bash -c &quot;uname -r&quot;
4.14.225-169.362.amzn2.x86_64
</code></pre><p></p><p>The nginx-kata Pod returns the kernel version reported by the kernel running inside the Kata VM, whereas the nginx Pod reports the kernel version of the host OS as it&apos;s running as a traditional runc container.</p><h2 id="conclusion">Conclusion</h2><p>The industry shift to containers presents unique challenges in securing user workloads within multi-tenant untrusted environments.</p><p>Kata Containers utilizes open source hypervisors as an isolation boundary for each container (or collection of containers in a pod); this approach solves the shared kernel dilemma with existing bare metal container solutions.</p><p>Combining Kata with EKS-D provides secure VM workload isolation on the same software that has enabled tens of thousands of Kubernetes clusters on Amazon EKS. This includes the latest upstream updates, as well as extended security patching support.</p><p>Photo by <a href="https://unsplash.com/@malegs?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Malena Gonzalez Serena</a> on <a href="https://unsplash.com/t/textures-patterns?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a></p>]]></content:encoded></item><item><title><![CDATA[oke-autoscaler, a Kubernetes node autoscaler for OKE]]></title><description><![CDATA[oke-autoscaler is an open source Kubernetes node autoscaler for Oracle Container Engine for Kubernetes (OKE). 
The oke-autoscaler function provides an automated mechanism to scale OKE clusters by automatically adding or removing nodes from a node pool.]]></description><link>https://blog.bytequalia.com/oke-autoscaler-a-kubernetes-cluster-autoscaler-for-oke/</link><guid isPermaLink="false">5f645709755d720001c6d9a2</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[Serverless]]></category><dc:creator><![CDATA[Cameron Senese]]></dc:creator><pubDate>Wed, 24 Mar 2021 09:20:16 GMT</pubDate><media:content url="https://blog.bytequalia.com/content/images/2021/03/pawel-czerwinski-HKHwdinroSo-unsplash--1--1.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://blog.bytequalia.com/content/images/2021/03/pawel-czerwinski-HKHwdinroSo-unsplash--1--1.jpg" alt="oke-autoscaler, a Kubernetes node autoscaler for OKE"><p>oke-autoscaler is an open source Kubernetes node autoscaler for <a href="https://cloud.oracle.com/containers/kubernetes-engine" rel="nofollow">Oracle Container Engine for Kubernetes (OKE)</a>. The oke-autoscaler function provides an automated mechanism to scale OKE clusters by automatically adding or removing nodes from a node pool.</p><figure class="kg-card kg-image-card"><img src="https://blog.bytequalia.com/content/images/2021/03/image-2.png" class="kg-image" alt="oke-autoscaler, a Kubernetes node autoscaler for OKE" loading="lazy"></figure><p>When you enable the oke-autoscaler function, you don&apos;t need to manually add or remove nodes, or over-provision your node pools. Instead, you specify a minimum and maximum size for the node pool, and the rest is automatic.</p><blockquote>The <a href="https://gitlab.com/byteQualia/oke-autoscaler">oke-autoscaler git repository</a> has everything you need to implement node autoscaling in your OKE clusters, and a comprehensive step-by-step work instruction.</blockquote><h2 id="introduction">Introduction</h2><p>Node autoscaling is one of the key features provided by the Kubernetes cluster. 
Node autoscaling provides the cluster with the ability to increase the number of nodes as the demand for service response increases, and decrease the number of nodes as demand decreases.</p><p>There are a number of different cluster autoscaler implementations currently available in the Kubernetes community. Of the available options (including the Kubernetes <a href="https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler">Cluster Autoscaler</a>, <a href="https://github.com/atlassian/escalator">Escalator</a>, <a href="https://keda.sh">KEDA</a>, and <a href="https://github.com/containership/cerebral">Cerebral</a>), the Kubernetes Cluster Autoscaler has been adopted as the &#x201C;de facto&#x201D; node autoscaling solution. It&#x2019;s maintained by SIG Autoscaling, and has documentation for most major cloud providers.<br><br>The Cluster Autoscaler follows a straightforward principle of reacting to &#x201C;unschedulable pods&#x201D; to trigger scale-up events. By leveraging a cluster-native metric, the Cluster Autoscaler offers a simplified implementation and a typically frictionless experience when getting up and running with node autoscaling.</p><p>The oke-autoscaler function implements the same approach to node autoscaling for OKE as the Cluster Autoscaler does for other Kubernetes implementations (i.e. using unschedulable pods as a signal). 
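</p><p>As a rough illustration, that check can be expressed in a few lines of Python (a simplified stand-in for the function&apos;s real logic, operating on pod data as returned by the Kubernetes API):</p><pre><code class="language-python">def count_unschedulable(pods):
    # Count pods that are Pending and flagged with an Unschedulable condition.
    return sum(
        1
        for pod in pods
        if pod['status']['phase'] == 'Pending'
        and any(
            cond.get('reason') == 'Unschedulable'
            for cond in pod['status'].get('conditions', [])
        )
    )
</code></pre><p>A non-zero result indicates resource pressure in the cluster. 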
Read on for a more detailed overview of how oke-autoscaler functions.</p><h2 id="about-oke-autoscaler">About oke-autoscaler</h2><p>The oke-autoscaler function automatically adjusts the size of the Kubernetes cluster when one of the following conditions is true:</p><ul><li>there are pods that have failed to run in the cluster due to insufficient resources;</li><li>nodes in the node pool have been underutilized for an extended period of time</li></ul><p>By default the oke-autoscaler function implements only the scale-up feature; scale-down is an optional feature.</p><p><em><a href="https://cloud.oracle.com/containers/kubernetes-engine" rel="nofollow noreferrer noopener">Oracle Container Engine for Kubernetes</a> (OKE) is a developer friendly, container-native, and enterprise-ready managed Kubernetes service for running highly available clusters with the control, security, and predictable performance of Oracle&#x2019;s Cloud Infrastructure.</em></p><p><em>OKE nodes are hosts in a Kubernetes cluster (individual machines: virtual or bare metal, provisioned as Kubernetes worker nodes). OKE nodes are deployed into logical groupings known as node pools. OKE nodes run containerized applications in deployment units known as pods. Pods schedule containers on nodes, which utilize resources such as CPU and RAM.</em></p><h2 id="overview">Overview</h2><p>oke-autoscaler is implemented as an Oracle Function (i.e. 
an OCI managed serverless function):</p><ul><li>the Oracle Function itself is written in Python: <a href="https://gitlab.com/byteQualia/oke-autoscaler/-/blob/master/oke-autoscaler/func.py">oke-autoscaler/func.py</a></li><li>the function uses a custom container image based on oraclelinux:7-slim, and also includes &#xA0;rh-python36, the OCI CLI, and kubectl: <a href="https://gitlab.com/byteQualia/oke-autoscaler/-/blob/master/oke-autoscaler/Dockerfile">oke-autoscaler/Dockerfile</a></li></ul><h3 id="evaluation-logic">Evaluation Logic</h3><p>The autoscaler function interacts with the Kubernetes API and a number of OCI control plane APIs to evaluate the state of the cluster, and the target node pool.</p><figure class="kg-card kg-image-card"><img src="https://blog.bytequalia.com/content/images/2021/10/oke-autoscaler-function-component-v0.01.png" class="kg-image" alt="oke-autoscaler, a Kubernetes node autoscaler for OKE" loading="lazy"></figure><p>When the function is invoked, it follows an order of operation as follows:</p><ol><li>evaluates the state of the node pool, to ensure that the node pool is in a stable condition</li><li>if the node pool is determined to be mutating (e.g. in the process of either adding or deleting a node, or stabilizing after adding a node), the autoscaler function will exit without performing any further operation</li><li>if the node pool is in a stable condition, scale-up is evaluated first</li><li>if scale-up is not triggered, the autoscaler function evaluates for scale-down (where scale-down has been enabled by the cluster administrator)</li></ol><h3 id="scheduling">Scheduling</h3><p>The oke-autoscaler function is designed to be invoked on a recurring schedule. The periodicity by which the function is scheduled to be invoked is configurable. 
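</p><p>The evaluation logic that each invocation runs can be sketched as a simple dispatcher (Python, with hypothetical helper methods standing in for the real Kubernetes and OCI API calls; this is not the repository&apos;s actual code):</p><pre><code class="language-python">def evaluate(pool, scale_down_enabled=False):
    # Sketch of the evaluation order: pool stability first,
    # then scale-up, then (optionally) scale-down.
    if pool.is_mutating():
        return 'none: node-pool-mutating'
    if pool.unschedulable_pod_count():
        return 'scale-up'
    if scale_down_enabled and pool.below_utilization_thresholds():
        return 'scale-down'
    return 'none: no-resource-pressure'
</code></pre><p>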
As a starting point, consider scheduling the oke-autoscaler function to be invoked at an interval of every 3 minutes.</p><figure class="kg-card kg-image-card"><img src="https://blog.bytequalia.com/content/images/2021/03/image.png" class="kg-image" alt="oke-autoscaler, a Kubernetes node autoscaler for OKE" loading="lazy"></figure><p>Once invoked, the function will run in accordance with the Evaluation Logic described herein.</p><p>A dimension to consider is the length of the window of time that the function will use to calculate the average resource utilization in the node pool.</p><p>The cluster administrator assigns a value to the function custom configuration parameter <code>node_pool_eval_window</code>. The oke-autoscaler function implements <code>node_pool_eval_window</code> as the number of minutes over which to:</p><ul><li>calculate the average CPU &amp; RAM utilization for the node pool when evaluating for scale-down</li></ul><p><code>node_pool_eval_window</code> is by default intended to represent the number of minutes between each invocation of the function, i.e. the periodicity of the function. In the case where the oke-autoscaler function is scheduled for invocation every 3 minutes, setting <code>node_pool_eval_window</code> to 3 minutes will configure the oke-autoscaler function to use the 3 minutes elapsed since the previous invocation as the window of time over which to evaluate node pool utilization metrics.</p><p><code>node_pool_eval_window</code> does not need to match the function invocation schedule, and can be set to calculate resource utilization over either a longer or shorter window of time.</p><h3 id="scaling">Scaling</h3><p>The cluster administrator defines what the node pool maximum and minimum number of nodes should be. 
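</p><p>Enforcing those boundaries amounts to a clamp; a minimal sketch (a hypothetical helper, not code from the repository):</p><pre><code class="language-python">def clamp_node_count(current, delta, min_nodes, max_nodes):
    # Apply a scaling delta, then clamp the result to the pool boundaries.
    return max(min_nodes, min(max_nodes, current + delta))
</code></pre><p>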
The oke-autoscaler function operates within these boundaries.</p><h4 id="scale-up">Scale-Up</h4><p>The oke-autoscaler function queries the cluster to enumerate the number of pods for the given node pool which are in a <code>Pending</code> state, and are flagged as being in an <code>Unschedulable</code> condition.</p><p>The number of pods flagged as being <code>Unschedulable</code> will be greater than zero when there are insufficient resources in the cluster on which to schedule pods. This condition will trigger the oke-autoscaler function to scale-up the node pool by adding an additional node.</p><blockquote><em>The scale-up feature is enabled by default.</em></blockquote><h4 id="stabilization">Stabilization</h4><p>During the period immediately after a new node becomes active in a node pool, the cluster may still report some pods as being <code>Unschedulable</code> - despite the fact that the cluster has introduced enough resource to support the desired state.</p><p>In order to prevent a premature scale-up event, the oke-autoscaler function implements a node pool stabilization window to fence off the node pool from any modification during the period immediately after the addition of a new node. 
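</p><p>The fencing check itself is a simple time comparison; a minimal sketch (hypothetical helper, assuming node creation timestamps are available from the node pool data):</p><pre><code class="language-python">from datetime import datetime, timedelta, timezone

def is_stabilizing(newest_node_created_at, window_minutes, now=None):
    # The pool stays fenced off while its newest node is younger than the window.
    now = now or datetime.now(timezone.utc)
    return timedelta(minutes=window_minutes) > (now - newest_node_created_at)
</code></pre><p>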
The node pool stabilization window is implemented to afford the cluster the required time to schedule the backlog of unschedulable pods to the new node.</p><p>Where the oke-autoscaler function is invoked during the stabilization window, and it detects the presence of <code>Unschedulable</code> pods - a scale-up event will not be triggered.</p><h4 id="scale-down">Scale-Down</h4><p>The cluster administrator defines, as a percentage, the average node pool CPU and/or RAM utilization below which the autoscaler function will scale-down the node pool by deleting a node.</p><p>The oke-autoscaler function calculates the node pool average CPU and RAM utilization as a mean score expressed as a percentage, which is derived by obtaining average utilization across all nodes in the node pool.</p><p>The calculated node pool averages for CPU and/or RAM utilization are evaluated against the percentage-based thresholds provided by the cluster administrator.</p><p>If the node pool average CPU and/or RAM utilization is less than the thresholds provided by the cluster administrator, the node pool will scale-down.</p><p>Administrator defined scale-down thresholds can be set to evaluate for:</p><ul><li>node pool average CPU utilization</li><li>node pool average RAM utilization</li><li>either node pool average CPU or RAM utilization</li></ul><p>When a specified scale-down condition is met, the oke-autoscaler function will cordon and drain the worker node, then call the Container Engine API to delete the node from the node pool.</p><blockquote><em>Enabling the scale-down feature is optional.</em></blockquote><h3 id="multiple-node-pools">Multiple Node Pools</h3><p>The oke-autoscaler function supports clusters which contain multiple node pools. 
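</p><p>The scale-down evaluation described above reduces to comparing pool means against the administrator&apos;s thresholds; a minimal sketch (hypothetical helper, with utilization and thresholds as percentages):</p><pre><code class="language-python">def should_scale_down(cpu_loads, ram_loads, cpu_threshold=None, ram_threshold=None):
    # Pool averages are the mean per-node utilization, expressed as percentages.
    avg_cpu = sum(cpu_loads) / len(cpu_loads)
    avg_ram = sum(ram_loads) / len(ram_loads)
    checks = []
    if cpu_threshold is not None:
        checks.append(cpu_threshold > avg_cpu)
    if ram_threshold is not None:
        checks.append(ram_threshold > avg_ram)
    return any(checks)
</code></pre><p>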
For a given cluster hosting multiple node pools, oke-autoscaler functions can be enabled for one or more of the associated node pools.</p><h3 id="function-return-data">Function Return Data</h3><p>The function will return a JSON array containing summary data describing node pool status, any action and associated result.</p><p>Function completed successfully - no action performed as no resource pressure:</p><pre><code class="language-json">Result: { 
    &quot;success&quot;: { 
        &quot;action&quot;: &quot;none&quot;, 
        &quot;reason&quot;: &quot;no-resource-pressure&quot;, 
        &quot;node-pool-name&quot;: &quot;prod-pool1&quot;, 
        &quot;node-pool-status&quot;: &quot;ready&quot;, 
        &quot;node-count&quot;: &quot;2.0&quot; 
    } 
} </code></pre><p>Function completed successfully - no action performed as node pool is stabilizing:</p><pre><code class="language-json">Result: { 
    &quot;success&quot;: { 
        &quot;action&quot;: &quot;none&quot;, 
        &quot;reason&quot;: &quot;node-pool-status&quot;, 
        &quot;node-pool-name&quot;: &quot;prod-pool1&quot;, 
        &quot;node-pool-status&quot;: &quot;stabilizing&quot;, 
        &quot;unschedulable-pods-count&quot;: &quot;4.0&quot;,
        &quot;node-count&quot;: &quot;2.0&quot; 
    } 
} </code></pre><p>Function completed successfully - scale-up:</p><pre><code class="language-json">Result: { 
    &quot;success&quot;: { 
        &quot;action&quot;: &quot;scale-up&quot;, 
        &quot;reason&quot;: &quot;unschedulable-pods&quot;, 
        &quot;unschedulable-pods-count&quot;: &quot;10.0&quot;, 
        &quot;node-pool-name&quot;: &quot;prod-pool1&quot;, 
        &quot;node-pool-status&quot;: &quot;ready&quot;, 
        &quot;node-count&quot;: &quot;3.0&quot; 
    } 
}</code></pre><p>Function completed with warning - scale-up, max node count limit reached:</p><pre><code class="language-json">Result: { 
    &quot;warning&quot;: { 
        &quot;action&quot;: &quot;none&quot;, 
        &quot;reason&quot;: &quot;node-max-limit-reached&quot;, 
        &quot;unschedulable-pods-count&quot;: &quot;10.0&quot;, 
        &quot;node-pool-name&quot;: &quot;prod-pool1&quot;, 
        &quot;node-pool-status&quot;: &quot;ready&quot;, 
        &quot;node-count&quot;: &quot;3.0&quot; 
    } 
} </code></pre><p>Function completed successfully - scale-down, low CPU utilization:</p><pre><code class="language-json">Result: { 
    &quot;success&quot;: { 
        &quot;action&quot;: &quot;scale-down&quot;, 
        &quot;reason&quot;: &quot;cpu&quot;, 
        &quot;node-pool-name&quot;: &quot;prod-pool1&quot;, 
        &quot;node-pool-status&quot;: &quot;ready&quot;, 
        &quot;node-count&quot;: &quot;3.0&quot; 
    } 
}</code></pre><p>Function failed - missing user input data:</p><pre><code class="language-json">Result: { 
    &quot;error&quot;: { 
        &quot;reason&quot;: &quot;missing-input-data&quot; 
    } 
}</code></pre><h3 id="function-log-data">Function Log Data</h3><p>The oke-autoscaler function has been configured to provide some basic logging regarding its operation.<br>The following excerpt illustrates the function log data relating to a single oke-autoscaler function invocation:</p><pre><code class="language-bash">Node Lifecycle State:  
ACTIVE 
ACTIVE 
...
...
Node Data: { 
    &quot;availability_domain&quot;: &quot;xqTA:US-ASHBURN-AD-1&quot;, 
    &quot;fault_domain&quot;: &quot;FAULT-DOMAIN-2&quot;, 
    &quot;id&quot;: &quot;ocid1.instance.oc1.iad.anuwcljrp7nzmjiczjcacpmcg6lw7p2hlpk5oejlocl2qugqn3rxlqlymloq&quot;, 
    &quot;lifecycle_details&quot;: &quot;&quot;, 
    &quot;lifecycle_state&quot;: &quot;ACTIVE&quot;, 
    &quot;name&quot;: &quot;oke-c3wgoddmizd-nrwmmzzgy2t-sfcf3hk5x2a-1&quot;, 
    &quot;node_error&quot;: null, 
    &quot;node_pool_id&quot;: &quot;ocid1.nodepool.oc1.iad.aaaaaaaaae3tsyjtmq3tan3emyydszrqmyzdkodgmuzgcytbgnrwmmzzgy2t&quot;, 
    &quot;private_ip&quot;: &quot;10.0.0.92&quot;, 
    &quot;public_ip&quot;: &quot;193.122.162.73&quot;, 
    &quot;subnet_id&quot;: &quot;ocid1.subnet.oc1.iad.aaaaaaaavnsn6hq7ogwpkragmzrl52dwp6vofkxgj6pvbllxscfcf3hk5x2a&quot; 
} 
...
...
Nodes: { 
    &quot;0&quot;: { 
        &quot;name&quot;: &quot;10.0.0.75&quot;, 
        &quot;id&quot;: &quot;ocid1.instance.oc1.iad.anuwcljrp7nzmjicknuodt727iawkx32unhc2kn53zrbrw7fubxexsamkf7q&quot;, 
        &quot;created&quot;: &quot;2020-05-20T11:50:04.988000+00:00&quot;, 
        &quot;cpu_load&quot;: 2.3619126090991616, 
        &quot;ram_load&quot;: 15.663938512292285 
    }, 
    &quot;1&quot;: { 
        &quot;name&quot;: &quot;10.0.0.92&quot;, 
        &quot;id&quot;: &quot;ocid1.instance.oc1.iad.anuwcljrp7nzmjiczjcacpmcg6lw7p2hlpk5oejlocl2qugqn3rxlqlymloq&quot;, 
        &quot;created&quot;: &quot;2020-05-24T05:33:14.121000+00:00&quot;, 
        &quot;cpu_load&quot;: 3.01701506531393, 
        &quot;ram_load&quot;: 14.896256379084324 
    } 
} 
...
...
Result: { 
    &quot;success&quot;: { 
        &quot;action&quot;: &quot;none&quot;, 
        &quot;reason&quot;: &quot;no-resource-pressure&quot;, 
        &quot;node-pool-name&quot;: &quot;prod-pool1&quot;, 
        &quot;node-pool-status&quot;: &quot;ready&quot;, 
        &quot;node-count&quot;: &quot;2&quot; 
    } 
}</code></pre><h2 id="limitations">Limitations</h2><p>The oke-autoscaler function has the following limitations:</p><ul><li>This function should not be configured for invocation on a recurring schedule with an interval less than 2.5 minutes</li><li>The function scale-down feature is not designed for use in clusters scheduling stateful workloads that utilise Local PersistentVolumes</li><li>The function initiates a call to the Container Engine API <code>updateNodePool()</code> to scale, which can take several minutes to complete</li><li>The function cannot wait for the <code>updateNodePool()</code> operation to complete, the cluster administrator will need to monitor the success or failure of the operation outside of the function</li></ul><p>If resources are deleted or moved when autoscaling your node pool, workloads might experience transient disruption. For example, if the workload consists of a controller with a single replica, that replica&apos;s pod might be rescheduled onto a different node if its current node is deleted.</p><p>Before enabling the autoscaler function scale-down feature, workloads should be configured to tolerate potential disruption, or to ensure that critical pods are not interrupted.</p><h3 id="conclusion">Conclusion</h3><p>Node autoscaling is one of the key features provided by the Kubernetes cluster, and with the oke-autoscaler you can easily implement this functionality into an OKE cluster.</p><p>Head over to the oke-autoscaler <a href="https://gitlab.com/byteQualia/oke-autoscaler">code repository</a> for necessary code and a detailed work instruction to start gaining the benefits of Kubernetes node autoscaling in your OKE clusters.</p><p></p><p>Cover Photo by <a href="https://unsplash.com/@pawel_czerwinski?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Pawe&#x142; Czerwi&#x144;ski</a> on <a 
href="https://unsplash.com/t/textures-patterns?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a>.</p>]]></content:encoded></item><item><title><![CDATA[ESP8266 - Personalised IoT id-badge in a Serverless Function]]></title><description><![CDATA[The Oracle Code Card is a Wi-Fi-enabled IoT device that's built around the ESP8266 microcontroller, and includes an e-paper display. This post describes a serverless function that transforms your Code Card into an awesome, personalized id-badge.]]></description><link>https://blog.bytequalia.com/esp8266-personalised-iot-id-badge/</link><guid isPermaLink="false">5f645376755d720001c6d96c</guid><category><![CDATA[IoT]]></category><category><![CDATA[Serverless]]></category><dc:creator><![CDATA[Cameron Senese]]></dc:creator><pubDate>Tue, 01 Sep 2020 04:46:00 GMT</pubDate><media:content url="https://blog.bytequalia.com/content/images/2021/09/jida-li-pfrGh-NEzX4-unsplash.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://blog.bytequalia.com/content/images/2021/09/jida-li-pfrGh-NEzX4-unsplash.jpg" alt="ESP8266 - Personalised IoT id-badge in a Serverless Function"><p>The ESP8266 chip is a great candidate for IoT development. 
It provides built-in 2.4GHz WiFi capabilities, has a multitude of different general purpose I/O (GPIO) pins available, and can take advantage of the extensive libraries available in the larger Arduino community - in fact many different commercial IoT devices use the ESP8266 chip.</p><h2 id="introduction">Introduction</h2><p>The Oracle Code Card is a Wi-Fi-enabled IoT device that&apos;s built around the <a href="https://en.wikipedia.org/wiki/ESP8266" rel="nofollow noreferrer noopener">ESP8266</a> microcontroller, and includes an e-paper display.</p><figure class="kg-card kg-image-card"><img src="https://blog.bytequalia.com/content/images/2020/09/codecard-avatar-photo-v0.02.png" class="kg-image" alt="ESP8266 - Personalised IoT id-badge in a Serverless Function" loading="lazy"></figure><p>Oracle actually give these away at their in-person events - amazing!</p><p>The Code Cards serve as a super-cool id-badge at the actual in-person events, and they also provide a platform for a bunch of hands-on learning and development exercises post event.</p><p>This post will outline a solution that I&apos;ve built specifically for the Code Card platform: it&apos;s a serverless function that transforms your Code Card into an awesome, personalized id-badge.</p><h2 id="overview">Overview</h2><p>The serverless function is designed to assemble and display a bitmap image which includes a unique card-owner identicon that&apos;s generated on-the-fly via a 3rd party API. Generation of the identicon avatar is based on a hash of the card owner&apos;s name.</p><p>Apart from turning your Code Card into a personalized id-badge, the avatar function is a great reference for building an Oracle function, which when invoked coordinates a number of interactions with a range of OCI services, external services, and the Code Card IoT device itself.</p><p>The avatar function is implemented as an Oracle Function (i.e. 
an OCI managed serverless function):</p><ul><li>the function is invoked via an API published via the OCI API Gateway Service</li><li>the Oracle Function itself is written in Python: <a href="https://blog.bytequalia.com/byteQualia/codecard-avatar/-/blob/master/codecard-avatar/func.py">codecard-avatar/func.py</a></li><li>the function uses a custom container image based on oraclelinux:7-slim, and also includes rh-python36 and ImageMagick: <a href="https://blog.bytequalia.com/byteQualia/codecard-avatar/-/blob/master/codecard-avatar/Dockerfile">codecard-avatar/Dockerfile</a></li></ul><h3 id="operation">Operation</h3><figure class="kg-card kg-image-card"><img src="https://blog.bytequalia.com/content/images/2020/09/codecard-avatar-workflow-v0.01.png" class="kg-image" alt="ESP8266 - Personalised IoT id-badge in a Serverless Function" loading="lazy"></figure><p>In reference to the workflow illustration, there are two main elements to the workflow:</p><ol><li>During the &quot;Configure&quot; phase, the Code Card is configured using the Code Card Configurator mobile application (here the Code Card unique ID and owner&apos;s name are registered in a database table hosted on Oracle APEX)</li></ol><p>2. 
During the &quot;Run&quot; phase, the avatar function is then invoked by the Code Card via the API Gateway, which initiates a series of interactions with:</p><ul><li>the Code Card designer APEX backend</li><li>the identicon generation web service (<a href="http://identicon-1132.appspot.com" rel="nofollow noreferrer noopener">http://identicon-1132.appspot.com</a>)</li><li>OCI object storage</li></ul><p>Combining the gathered artefacts, the function proceeds to assemble the id-badge custom bitmap using ImageMagick, and directs the Code Card to download and display the image via the object storage service.</p><h3 id="about-oracle-functions">About Oracle Functions</h3><p>Oracle Functions is a fully managed, highly scalable, on-demand, Functions-as-a-Service (FaaS) platform, built on enterprise-grade Oracle Cloud Infrastructure and powered by the Fn Project open source engine. With Oracle Functions, you can deploy your code, call it directly or trigger it in response to events, and get billed only for the resources consumed during the execution.</p><p>Oracle Functions are &quot;container-native&quot;. This means that each function is a completely self-contained Docker image that is stored in your OCIR Docker Registry and pulled, deployed and invoked when you invoke your function.</p><h3 id="container-image">Container Image</h3><p>In order to make available each of the tools used during the function&apos;s operation (e.g. image manipulation, etc.), the avatar function is implemented using a BYO container image (see snippet following) - having this degree of latitude when working with Oracle functions comes in very handy.</p><pre><code class="language-docker">FROM oraclelinux:7-slim
ENV OCI_CLI_SUPPRESS_FILE_PERMISSIONS_WARNING=True
ENV PATH=&quot;/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/rh/rh-python36/root/usr/bin:/opt/rh/rh-python36/root/usr/bin/oci:${PATH}&quot;
ENV LC_ALL=en_US.utf8
ENV LANG=en_US.utf8
ARG CLI_VERSION=2.10.3
RUN mkdir /oci-cli
RUN mkdir /function
ADD requirements.txt /function/
WORKDIR /function
RUN yum -y install oracle-release-el7 &amp;&amp; \
    yum -y install oracle-softwarecollection-release-el7 &amp;&amp; \
    yum-config-manager --enable software_collections &amp;&amp; \
    yum-config-manager --enable ol7_latest ol7_optional_latest ol7_addons &amp;&amp; \
    yum-config-manager --disable ol7_ociyum_config &amp;&amp; \
    yum -y install scl-utils &amp;&amp; \
    yum -y install rh-python36 &amp;&amp; \
    yum -y install gcc &amp;&amp; \
    yum -y install wget &amp;&amp; \
    yum -y install unzip &amp;&amp; \
    yum -y install jq &amp;&amp; \
    yum -y install ImageMagick &amp;&amp; \
    export PATH=$PATH:/opt/rh/rh-python36/root/usr/bin &amp;&amp; \
    rm -rf /var/cache/yum &amp;&amp; \
    pip3 install --no-cache --no-cache-dir -r requirements.txt &amp;&amp; rm -fr ~/.cache/pip /tmp* requirements.txt func.yaml Dockerfile .venv
WORKDIR /oci-cli
RUN wget -qO- -O oci-cli.zip &quot;https://github.com/oracle/oci-cli/releases/download/v${CLI_VERSION}/oci-cli-${CLI_VERSION}.zip&quot; &amp;&amp; \
    unzip -q oci-cli.zip -d .. &amp;&amp; \
    rm oci-cli.zip &amp;&amp; \
    pip3 install oci_cli-*-py2.py3-none-any.whl &amp;&amp; \
    yes | oci setup autocomplete &amp;&amp; \
    groupadd --gid 1000 fn &amp;&amp; \
    adduser --uid 1000 --gid fn fn
ADD . /function/
ENTRYPOINT [&quot;/opt/rh/rh-python36/root/usr/bin/fdk&quot;, &quot;/function/func.py&quot;, &quot;handler&quot;]</code></pre><p></p><p>By specifying the runtime as <code>docker</code> in the <a href="https://gitlab.com/byteQualia/codecard-avatar/-/blob/master/codecard-avatar/func.yaml">func.yaml</a> configuration file, the Fn CLI will build the custom container image as a part of the function deployment process. The <code>entrypoint</code> directive instructs the Python FDK to run the custom Code Card identicon script upon function invocation.</p><pre><code class="language-yaml">schema_version: 20180708
name: codecard-avatar
version: 0.0.1
runtime: docker
entrypoint: /python/bin/fdk /function/func.py handler
memory: 256</code></pre><p></p><h2 id="solution-work-instruction">Solution work instruction</h2><p>For a full work instruction, and access to the code - follow <a href="https://gitlab.com/byteQualia/codecard-avatar">this link</a> to the solution git repository.</p><p>For access to a range of other tools and demos built around the Code Card platform - <a href="https://github.com/cameronsenese/codecard">this link</a> to the Code Card git repository.</p><p>Photo by <a href="https://unsplash.com/@jida_leee?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Jida Li</a> on <a href="https://unsplash.com/?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a></p>]]></content:encoded></item><item><title><![CDATA[Building a CLI utility for managing cloud services lifecycle using the OCI Go SDK]]></title><description><![CDATA[Oracle provide a number of SDKs for OCI. okectl is a CLI utility designed to automate cloud service lifecycle operations. 
This post provides an overview of the OCI Go SDK, and explores how okectl implements the Go SDK to orchestrate cloud services including Kubernetes clusters running on OKE.]]></description><link>https://blog.bytequalia.com/okectl/</link><guid isPermaLink="false">5e8aeeedb6bdac0001718b89</guid><category><![CDATA[Cloud Native]]></category><category><![CDATA[Kubernetes]]></category><dc:creator><![CDATA[Cameron Senese]]></dc:creator><pubDate>Sat, 18 Apr 2020 05:57:10 GMT</pubDate><media:content url="https://blog.bytequalia.com/content/images/2020/04/pawel-czerwinski-wrzMveYicSA-unsplash-1.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://blog.bytequalia.com/content/images/2020/04/pawel-czerwinski-wrzMveYicSA-unsplash-1.jpg" alt="Building a CLI utility for managing cloud services lifecycle using the OCI Go SDK"><p>okectl is an open source CLI utility designed for use with <a href="https://cloud.oracle.com/containers/kubernetes-engine" rel="nofollow">Oracle Container Engine for Kubernetes (OKE)</a>. 
okectl provides a command-line interface for management of OKE and associated resources, including Kubernetes cluster lifecycle.</p><figure class="kg-card kg-image-card"><img src="https://blog.bytequalia.com/content/images/2020/04/pawel-czerwinski-wrzMveYicSA-unsplash.jpg" class="kg-image" alt="Building a CLI utility for managing cloud services lifecycle using the OCI Go SDK" loading="lazy"></figure><h2 id="introduction">Introduction</h2><p>okectl is designed as a stand-alone tool to automate operations such as the Kubernetes cluster creation process, and is typically best used as part of an automation pipeline.</p><p>Aside from being a useful automation tool, it&apos;s also a useful example of a practical application development scenario that leverages an OCI Software Development Kit (SDK).</p><p>In this blog post I&apos;ll provide an overview of the OCI Go SDK, as well as an overview of okectl, and how it&apos;s built.</p><h2 id="about-okectl">About okectl</h2><p>okectl is built using the <a href="https://github.com/oracle/oci-go-sdk">Go SDK</a> for Oracle Cloud Infrastructure (OCI).</p><h3 id="supported-operations">Supported Operations</h3><ul><li><code>--createOkeCluster</code><br><em>Creates Kubernetes cluster control plane, node pool, worker nodes, &amp; configuration data (kubeconfig &amp; JSON cluster description).</em></li><li><code>--deleteOkeCluster</code><br><em>Deletes specified cluster.</em></li><li><code>--getOkeNodePool</code><br><em>Retrieves cluster, node pool, and node details for a specified node pool.</em></li><li><code>--createOkeKubeconfig</code><br><em>Creates kubeconfig authentication artefact for kubectl.</em></li></ul><h3 id="interesting-features">Interesting Features</h3><p>As a little context, I created okectl a while back - right at the time when the OKE service had just been released.</p><p>I needed a means to provision a Kubernetes cluster to OCI end to end, entirely <em>as code.</em> Terraform provided the ability to create all of 
OKE&apos;s OCI-related dependencies, including the VCN, subnets, and load balancers, but at the time lacked support for the OKE service.</p><p>I created okectl to close this gap. okectl was created to work in tandem with Terraform; that is, to be executed by Terraform as a <code>local-exec</code> operation. With okectl, a single Terraform configuration could be composed to first create the OCI resources necessary to support an OKE cluster, then in turn run okectl to build the cluster.</p><p>With this use-case in mind, I also built some interesting features into okectl to provide Terraform with the ability to perform automated, remote software installation to OKE clusters - again, as part of a single Terraform configuration (in this case, I leverage Terraform <code>remote-exec</code> and <a href="https://helm.sh">Helm</a>).</p><blockquote><em>Note: The <a href="https://www.terraform.io/docs/providers/oci/index.html">OCI Terraform provider</a> now supports OKE, including operations such as <a href="https://www.terraform.io/docs/providers/oci/r/containerengine_cluster.html">cluster lifecycle management</a>.</em></blockquote><h3 id="-waitnodesactive"><code>--waitNodesActive</code></h3><p>By default, OKE will report the status of a worker node pool as <code>ACTIVE</code> as soon as the node pool entity itself is created - however this precedes the instantiation of the worker nodes themselves. 
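</p><p><em>As an illustration of this wait behaviour, the polling loop okectl performs can be sketched with a stubbed status lookup standing in for the SDK call (names and timings below are hypothetical, not from the okectl source):</em></p>

```go
package main

import (
	"fmt"
	"time"
)

// polls counts status lookups; nodeStates is a hypothetical stub standing
// in for the SDK GetNodePool call - it reports one node still CREATING for
// the first two polls, then all nodes ACTIVE..
var polls = 0

func nodeStates() []string {
	polls++
	if polls < 3 {
		return []string{"ACTIVE", "CREATING"}
	}
	return []string{"ACTIVE", "ACTIVE"}
}

// waitAllActive blocks until every node reports ACTIVE - the behaviour
// selected by okectl's --waitNodesActive=all flag..
func waitAllActive() {
	for {
		allActive := true
		for _, s := range nodeStates() {
			if s != "ACTIVE" {
				allActive = false
			}
		}
		if allActive {
			return
		}
		time.Sleep(10 * time.Millisecond) // the real polling is rate limited via the SDK helpers package
	}
}

func main() {
	waitAllActive()
	fmt.Printf("all nodes ACTIVE after %d polls\n", polls)
}
```

<p>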
It will typically be some minutes after the node pool is <code>ACTIVE</code> that the worker nodes in the pool will have been instantiated, software installation completed, and the worker nodes themselves reach <code>ACTIVE</code> status - thus ready to run containers in support of the cluster.</p><p><code>--getOkeNodePool</code> retrieves cluster, node pool, and node details for a specified node pool, and also provides the ability to wait for nodes in a given node pool to become fully active, via the flag <code>--waitNodesActive</code>.</p><p>This is handy when used as part of an automation pipeline, where the next action is dependent on having cluster worker nodes fully provisioned and active - e.g. when Terraforming, to pause all operations until worker nodes are active and ready for further configuration or software installation. (See the flag <code>--tfExternalDs</code> below for further detail on providing a worker node IP address to Terraform.)</p><p><code>--waitNodesActive</code> has three modes of operation:</p><ul><li><code>--waitNodesActive=false</code><br>okectl will not wait &amp; will return when the nominated node pool is <code>ACTIVE</code>, regardless of the state of nodes within the pool. 
</li><li><code>--waitNodesActive=any</code><br>okectl will wait &amp; return when the nominated node pool is <code>ACTIVE</code>, and any of the nodes in the nominated node pool return as <code>ACTIVE</code>.</li><li><code>--waitNodesActive=all</code><br>okectl will wait &amp; return when the nominated node pool is <code>ACTIVE</code>, and all of the nodes in the nominated node pool return as <code>ACTIVE</code>.</li></ul><p>okectl implements <code>--waitNodesActive</code> by enumerating the lifecycle-state of nodes in a given node pool via the <code>NodeLifecycleStateEnum</code> enumerator within the OCI Go SDK.</p><p>Node lifecycle state is exposed via the SDK (ContainerEngineClient) <code>GetNodePool</code> function, which in the following example is providing data to okectl&apos;s <code>getNodeLifecycleState</code> function:</p><pre><code class="language-go">// get worker node lifecycle status..
func getNodeLifeCycleState(
	ctx context.Context,
	client containerengine.ContainerEngineClient,
	nodePoolId string) containerengine.GetNodePoolResponse {

	req := containerengine.GetNodePoolRequest{}
	req.NodePoolId = common.String(nodePoolId)

	resp, err := client.GetNodePool(ctx, req)
	helpers.FatalIfError(err)

	// marshal &amp; parse json..
	nodePoolResp := resp.NodePool
	nodesJson, _ := json.Marshal(nodePoolResp)
	jsonParsed, _ := gabs.ParseJSON(nodesJson)
	nodeLifeCycleState = (jsonParsed.Path(&quot;nodes.lifecycleState&quot;).String())
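	// Illustrative aside (not in the okectl source): the same node states are
	// available without JSON parsing by ranging over the typed SDK response,
	// e.g. to test for --waitNodesActive=all:
	//
	//	allActive := len(nodePoolResp.Nodes) &gt; 0
	//	for _, node := range nodePoolResp.Nodes {
	//		allActive = allActive &amp;&amp; node.LifecycleState == containerengine.NodeLifecycleStateActive
	//	}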

	return resp
}</code></pre><p></p><h3 id="-tfexternalds"><code>--tfExternalDs</code></h3><p>Where the <code>--getOkeNodePool</code> flag <code>--tfExternalDs=true</code> is used, okectl will run as a Terraform external data source.</p><p>The Terraform external data source allows an external program implementing a specific protocol to act as a data source, exposing arbitrary data to Terraform for use elsewhere in the Terraform configuration.</p><p>In this circumstance, okectl provides a JSON response containing the public IP address of a worker node in a format compatible with the Terraform external data source specification:</p><pre><code class="language-bash">./okectl getOkeNodePool --tfExternalDs=true
{&quot;workerNodeIp&quot;:&quot;132.145.156.184&quot;}</code></pre><p></p><p>In combination with the <code>--waitNodesActive</code> flag, this provides the ability to have Terraform first wait for worker nodes in a new node pool to become active, then obtain the public IP address of a worker node from okectl. With the public IP address of the worker node, Terraform can proceed to call a <code>remote-exec</code> provisioner to then perform operations such as cluster configuration and application workload deployment.</p><h2 id="about-the-oci-software-development-kits">About the OCI Software Development Kits</h2><p>Oracle Cloud Infrastructure provides a number of SDKs to facilitate development of custom solutions.</p><p>The OCI SDKs are designed to streamline the process of building and deploying applications that integrate with Oracle Cloud Infrastructure services.</p><p>Each SDK provides the tools you need to develop an application, including code samples and documentation to create, test, and troubleshoot solutions.</p><blockquote>If you want to contribute to the development of the SDKs, they are all open source and <a href="https://github.com/oracle">available on GitHub</a>.</blockquote><p>At present Oracle offer SDKs for <a href="https://docs.cloud.oracle.com/en-us/iaas/Content/API/SDKDocs/javasdk.htm">Java</a>, <a href="https://docs.cloud.oracle.com/en-us/iaas/Content/API/SDKDocs/pythonsdk.htm">Python</a>, <a href="https://docs.cloud.oracle.com/en-us/iaas/Content/API/SDKDocs/rubysdk.htm">Ruby</a>, &amp; <a href="https://docs.cloud.oracle.com/en-us/iaas/Content/API/SDKDocs/gosdk.htm">Go</a>.</p><h3 id="oci-rest-apis-sdks">OCI REST APIs &amp; SDKs</h3><p>Generally speaking, the Oracle Cloud Infrastructure APIs are typical REST APIs that adopt the following characteristics:</p><ul><li>The Oracle Cloud Infrastructure APIs use standard HTTP requests and responses.</li><li>All Oracle Cloud Infrastructure API requests must support HTTPS and SSL protocol TLS 
1.2.</li><li>All Oracle Cloud Infrastructure API requests must be signed for authentication purposes.</li></ul><p>Further detail regarding the OCI REST APIs can be found <a href="https://docs.cloud.oracle.com/en-us/iaas/Content/API/Concepts/usingapi.htm">here</a>, including the <a href="https://docs.cloud.oracle.com/en-us/iaas/api/#/en/containerengine/20180222/">API for the Container Engine for Kubernetes</a> service - which can be used to build, deploy, and manage OKE clusters.</p><p>Whilst it&apos;s possible to program directly against the OCI REST APIs, the OCI SDKs provide a lot of pre-built functionality, and abstract away much of the complexity required when interacting directly with APIs - for example, creating authorisation signatures, and parsing responses.</p><blockquote>In addition to the SDKs, Oracle also provide the <a href="https://docs.cloud.oracle.com/en-us/iaas/Content/API/Concepts/cliconcepts.htm">OCI CLI</a> and the <a href="https://www.terraform.io/docs/providers/oci/index.html">OCI Terraform provider</a> as additional options for a more streamlined experience when developing with the OCI REST API. </blockquote><h3 id="oci-go-sdk">OCI Go SDK</h3><p>The OCI Go SDK contains the following components:</p><ul><li><strong>Service packages</strong>: All packages except <code>common</code> and any other package found inside <code>cmd</code>. These packages represent the Oracle Cloud Infrastructure services supported by the Go SDK. Each package represents a service. These packages include methods to interact with the service, structs that model input and output parameters, and a client struct that acts as receiver for the above methods.</li><li><strong>Common package</strong>: Found in the <code>common</code> directory. The common package provides supporting functions and structs used by service packages. Includes HTTP request/response (de)serialization, request signing, JSON parsing, pointer to reference and other helper functions. 
Most of the functions in this package are meant to be used by the service packages.</li><li><strong>cmd</strong>: Internal tools used by the <code>oci-go-sdk</code>.</li></ul><p>The Go SDK also provides a <a href="https://github.com/oracle/oci-go-sdk/tree/master/example">broad range of examples</a> for programming with many of the available OCI services, including the core services (compute, network, etc.), identity and access management, database, email, DNS, and more. </p><p>Full documentation can be found on the GoDocs site <a href="https://godoc.org/github.com/oracle/oci-go-sdk">here</a>.</p><p>To start working with the Go SDK, you need to import the service packages that serve your requirements, create a client, and then proceed to use the client to make calls.</p><p>okectl utilises the <code>common</code>, <code>containerengine</code>, and <code>helpers</code> packages from the OCI Go SDK:</p><pre><code class="language-go">// import libraries..
import (
	&quot;context&quot;
	&quot;encoding/json&quot;
	&quot;fmt&quot;
	&quot;io&quot;
	&quot;io/ioutil&quot;
	&quot;os&quot;
	&quot;path/filepath&quot;
	&quot;regexp&quot;
	&quot;strings&quot;
	&quot;time&quot;

	&quot;github.com/Jeffail/gabs&quot;
	&quot;gopkg.in/alecthomas/kingpin.v2&quot;
	&quot;github.com/oracle/oci-go-sdk/common&quot;
	&quot;github.com/oracle/oci-go-sdk/containerengine&quot;
	&quot;github.com/oracle/oci-go-sdk/example/helpers&quot;
)</code></pre><p></p><p>The <code>containerengine</code> package is provided to simplify much of the heavy lifting associated with the orchestration of the OKE service. okectl leverages the <code>containerengine</code> package to create/destroy clusters, cluster node pools, and kubeconfig artefacts.</p><p>Before using the SDK to interact with a service, we first call the <code>common.DefaultConfigProvider()</code> function to provide necessary configuration and authentication data. See the <a href="#authentication">Authentication</a> section for information on creating a configuration file.</p><pre><code class="language-go">config := common.DefaultConfigProvider()
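// DefaultConfigProvider reads credentials from the ~/.oci/config file.
// Alternatively (illustrative), configuration can be supplied at runtime via
// common.NewRawConfigurationProvider(tenancy, user, region, fingerprint, privateKey, passphrase).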
client, err := containerengine.NewContainerEngineClientWithConfigurationProvider(config)
if err != nil { 
     panic(err)
}</code></pre><p></p><p>After successfully creating a client, requests can now be made to the service. Generally, all functions associated with an operation accept <a href="https://golang.org/pkg/context/" rel="nofollow"><code>context.Context</code></a> and a struct that wraps all input parameters. The functions then return a response struct that contains the desired data, and an error struct that describes the error if an error occurs:</p><pre><code class="language-Go">// create cluster..
func createCluster(
	ctx context.Context,
	client containerengine.ContainerEngineClient,
	clusterName, vcnId, compartmentId, kubeVersion, subnet1Id, subnet2Id string) containerengine.CreateClusterResponse {

	req := containerengine.CreateClusterRequest{}
	req.Name = common.String(clusterName)
	req.CompartmentId = common.String(compartmentId)
	req.VcnId = common.String(vcnId)
	req.KubernetesVersion = common.String(kubeVersion)
	req.Options = &amp;containerengine.ClusterCreateOptions{
		ServiceLbSubnetIds: []string{subnet1Id, subnet2Id},
		AddOns: &amp;containerengine.AddOnOptions{
			IsKubernetesDashboardEnabled: common.Bool(true),
			IsTillerEnabled:              common.Bool(true),
		},
	}
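	// Note: CreateCluster provisions the cluster control plane only - okectl
	// issues a separate CreateNodePool request (not shown here) to provision
	// the worker node pool and nodes.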

	fmt.Println(&quot;OKECTL :: Create Cluster :: Submitted ...&quot;)
	resp, err := client.CreateCluster(ctx, req)
	helpers.FatalIfError(err)

	return resp
}</code></pre><p></p><p>OCI applies throttling to many API requests to prevent accidental or abusive use of resources. If you make too many requests too quickly, you might see some succeed and others fail. Oracle recommends that you implement an exponential back-off, starting from a few seconds to a maximum of 60 seconds.</p><p>The <code><a href="https://github.com/oracle/oci-go-sdk/tree/master/example/helpers">helpers</a></code> package implements a number of features, including an exponential retry backoff mechanism that&apos;s used to rate limit API polling.</p><p>okectl uses this to gracefully wait for operations to complete, such as cluster or node-pool creation: </p><pre><code class="language-go">// wait for create cluster completion..
workReqRespCls := waitUntilWorkRequestComplete(c, createClusterResp.OpcWorkRequestId)
fmt.Println(&quot;OKECTL :: Create Cluster :: Complete ...&quot;)
clusterId := getResourceID(workReqRespCls.Resources, containerengine.WorkRequestResourceActionTypeCreated, &quot;CLUSTER&quot;)</code></pre><p></p><h3 id="installation">Installation</h3><p>Installing the Go SDK is simple: use the <code>go get</code> command to download the package (and any dependencies), and automatically install:</p><pre><code class="language-bash">go get -u github.com/oracle/oci-go-sdk</code></pre><p></p><h3 id="authentication">Authentication</h3><p>Oracle Cloud Infrastructure SDKs require basic configuration information, like user credentials and tenancy OCID. You can provide this information by:</p><ul><li>Using a configuration file</li><li>Declaring a configuration at runtime</li></ul><p>See the <a href="https://docs.cloud.oracle.com/en-us/iaas/Content/API/Concepts/sdkconfig.htm">following overview</a> for information on how to create a configuration file.</p><p>To declare a configuration at runtime, implement the <code>ConfigurationProvider</code> interface shown below:</p><pre><code class="language-go">// ConfigurationProvider wraps information about the account owner
type ConfigurationProvider interface {
    KeyProvider
    TenancyOCID() (string, error)
    UserOCID() (string, error)
    KeyFingerprint() (string, error)
    Region() (string, error)
}</code></pre><p></p><h3 id="debug">Debug</h3><p>The SDK has a built-in logging mechanism used internally. The internal logging logic is used to record the raw HTTP requests, responses and potential errors when (un)marshalling requests and responses.</p><p>Built-in logging in the SDK is controlled via the environment variable <code>OCI_GO_SDK_DEBUG</code> and its contents.</p><p>The following are possible values for the <code>OCI_GO_SDK_DEBUG</code> variable:</p><ul><li><code>info</code> or <code>i</code> enables all info logging messages</li><li><code>debug</code> or <code>d</code> enables all debug and info logging messages</li><li><code>verbose</code> or <code>v</code> or <code>1</code> enables all verbose, debug and info logging messages</li><li><code>null</code> turns all logging messages off</li></ul><p>For example:</p><pre><code class="language-bash">OCI_GO_SDK_DEBUG=1</code></pre><p></p><h2 id="building-okectl-from-source">Building okectl from source</h2><h3 id="dependencies">Dependencies</h3><ul><li>Install the <a href="https://golang.org/dl/" rel="nofollow">Go programming language</a></li><li>Install the <a href="https://github.com/oracle/oci-go-sdk">Go SDK for Oracle Cloud Infrastructure</a></li></ul><p>After installing Go and the OCI Go SDK, clone the <code>okectl</code> repository:</p><pre><code class="language-bash">git clone https://gitlab.com/byteQualia/okectl.git</code></pre><p></p><p>Commands from this point forward will assume that you are in the <code>../okectl</code> directory.</p><h3 id="build">Build</h3><p>Build an okectl Linux-compatible binary as follows:</p><pre><code class="language-bash">GOOS=linux
GOARCH=amd64
export GOOS GOARCH
go build -v okectl.go</code></pre><p></p><p>Further information and pre-built binaries can be found at the <a href="https://gitlab.com/byteQualia/okectl">okectl repository</a>.</p><h2 id="using-okectl">Using okectl</h2><p><a href="https://gitlab.com/byteQualia/oke-ctl.git">okectl</a> requires configuration data via command-line arguments &amp; associated flags. Command-line flags provide data relating to both the <a href="https://cloud.oracle.com/en_US/cloud-infrastructure" rel="nofollow">OCI</a> tenancy, and also OKE cluster configuration parameters.</p><p><a href="https://gitlab.com/byteQualia/oke-ctl.git">okectl</a> implements <a href="https://github.com/alecthomas/kingpin">Kingpin</a> to manage command-line and flag parsing. I chose Kingpin for this project as it&apos;s a type-safe, <a href="http://en.wikipedia.org/wiki/Fluent_interface" rel="nofollow">fluent-style</a> command-line parser that provides straight-forward support for flags, nested commands, and positional arguments.</p><blockquote>The following are a subset of the usage examples that are available at the <a href="https://gitlab.com/byteQualia/okectl">okectl repository</a>. For further examples, head over there.</blockquote><h3 id="example-usage">Example - Usage</h3><pre><code class="language-bash">./okectl
usage: OKECTL [&lt;flags&gt;] &lt;command&gt; [&lt;args&gt; ...]

A command-line application for configuring Oracle OKE (Container Engine for Kubernetes.)

Flags:
  --help                 Show context-sensitive help (also try --help-long and --help-man).
  --configDir=&quot;.okectl&quot;  Path where output files are created - e.g. kubeconfig file.
  --version              Show application version.

Commands:
  help [&lt;command&gt;...]
    Show help.

  createOkeCluster --vcnId=VCNID --compartmentId=COMPARTMENTID --subnet1Id=SUBNET1ID --subnet2Id=SUBNET2ID --subnet3Id=SUBNET3ID [&lt;flags&gt;]
    Create new OKE Kubernetes cluster.

  deleteOkeCluster --clusterId=CLUSTERID
    Delete OKE Kubernetes cluster.

  getOkeNodePool [&lt;flags&gt;]
    Get cluster, node pool, and node details for a specified node pool.

  createOkeKubeconfig --clusterId=CLUSTERID
    Create kubeconfig authentication artefact for kubectl.</code></pre><p></p><h3 id="example-create-kubernetes-cluster">Example - Create Kubernetes Cluster</h3><h4 id="interactive-help">Interactive Help</h4><pre><code class="language-bash">./okectl createOkeCluster --help

usage: OKECTL createOkeCluster --vcnId=VCNID --compartmentId=COMPARTMENTID --subnet1Id=SUBNET1ID --subnet2Id=SUBNET2ID --subnet3Id=SUBNET3ID [&lt;flags&gt;]

Create new OKE Kubernetes cluster.

Flags:
  --help                              Show context-sensitive help (also try --help-long and --help-man).
  --configDir=&quot;.okectl&quot;               Path where output files are created - e.g. kubeconfig file. Specify as absolute path.
  --version                           Show application version.
  --vcnId=VCNID                       OCI VCN-Id where cluster will be created.
  --compartmentId=COMPARTMENTID       OCI Compartment-Id where cluster will be created.
  --subnet1Id=SUBNET1ID               Cluster Control Plane LB Subnet 1.
  --subnet2Id=SUBNET2ID               Cluster Control Plane LB Subnet 2.
  --subnet3Id=SUBNET3ID               Worker Node Subnet 1.
  --subnet4Id=SUBNET4ID               Worker Node Subnet 2.
  --subnet5Id=SUBNET5ID               Worker Node Subnet 3.
  --clusterName=&quot;dev-oke-001&quot;         Kubernetes cluster name.
  --kubeVersion=&quot;v1.10.3&quot;             Kubernetes cluster version.
  --nodeImageName=&quot;Oracle-Linux-7.4&quot;  OS image used for Worker Node(s).
  --nodeShape=&quot;VM.Standard1.1&quot;        CPU/RAM allocated to Worker Node(s).
  --nodeSshKey=NODESSHKEY             SSH key to provision to Worker Node(s) for remote access.
  --quantityWkrSubnets=1              Number of subnets used to host Worker Node(s).
  --quantityPerSubnet=1               Number of Worker Nodes per subnet.
  --waitNodesActive=&quot;false&quot;           If waitNodesActive=all, wait &amp; return when all nodes in the pool are active.
                                      If waitNodesActive=any, wait &amp; return when any of the nodes in the pool are active.
                                      If waitNodesActive=false, no wait &amp; return when the node pool is active.</code></pre><p></p><h4 id="create-cluster">Create Cluster</h4><pre><code class="language-bash">./okectl createOkeCluster \
--clusterName=OKE-Cluster-001 \
--kubeVersion=v1.10.3 \
--vcnId=ocid1.vcn.oc1.iad.aaaaaaaamg7tqzjpxbbibev7lhp3bhgtcmgkbbrxr7td4if5qa64bbekdxqa \
--compartmentId=ocid1.compartment.oc1..aaaaaaaa2id6dilongtlxxmufoeunasaxuv76xxcb4ewxcxxxw5eba \
--quantityWkrSubnets=1 \
--quantityPerSubnet=1 \
--subnet1Id=ocid1.subnet.oc1.iad.aaaaaaaagq5apzuwr2qnianczzie4ffo6t46rcjehnsyoymiuunxaauq7y7a \
--subnet2Id=ocid1.subnet.oc1.iad.aaaaaaaadxr6zl4jpmcaxd4izzlvbyq2pqss3pmotx6dnusmh3ijorrpbhva \
--subnet3Id=ocid1.subnet.oc1.iad.aaaaaaaabf6k3ufcjdsdb5xfzzc3ayplhpip2jxtnaqvfcpakxt3bhmhecxa \
--nodeImageName=Oracle-Linux-7.4 \
--nodeShape=VM.Standard1.1 \
--nodeSshKey=&quot;ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDsHX7RR0z+JSAf+5nfTO9kS4Y6HV2pPXoXTqUJH...&quot; \
--waitNodesActive=&quot;all&quot;</code></pre><p></p><p>For the above <code>createOkeCluster</code> request, okectl will provision:</p><ul><li>Kubernetes Cluster (Control Plane) - The version will be as nominated via the <code>--kubeVersion</code> flag.</li><li>Node Pool - The node pool will be created across the number of worker subnets provided via the <code>--quantityWkrSubnets</code> flag.</li><li>Nodes - Worker nodes will be provisioned to each of the nominated worker subnets. The number of worker nodes per subnet is determined by the <code>--quantityPerSubnet</code> flag.</li><li>Configuration Data - A kubeconfig authentication artefact (kubeconfig) &amp; a JSON description of the cluster configuration (nodepool.json) are written to the local filesystem.</li></ul><p>Per the flag <code>--waitNodesActive=&quot;all&quot;</code>, okectl will return when the cluster, the node pool, and each of the nodes in the node pool are active.</p><p>Once completed, okectl will output the cluster, node pool, and node configuration data (stdout):</p><pre><code class="language-bash">OKECTL :: Create Cluster :: Complete ...
--------------------------------------------------------------------------------------
{
       &quot;id&quot;: &quot;ocid1.nodepool.oc1.iad.aaaaaaaaae3tonjqgftdiyrxha2gczrtgu3winbtgbsdszjqmnrdeodegu2t&quot;,
       &quot;compartmentId&quot;: &quot;ocid1.compartment.oc1..aaaaaaaa2id6dilongtl6fmufoeunasaxuv76b6cb4ewxcw4juafe55w5eba&quot;,
       &quot;clusterId&quot;: &quot;ocid1.cluster.oc1.iad.aaaaaaaaae2tgnlbmzrtknjygrrwmobsmvrwgnrsmnqtmzjygc2domtbgmyt&quot;,
       &quot;name&quot;: &quot;oke-dev-001&quot;,
       &quot;kubernetesVersion&quot;: &quot;v1.10.3&quot;,
       &quot;nodeImageId&quot;: &quot;ocid1.image.oc1.iad.aaaaaaaajlw3xfie2t5t52uegyhiq2npx7bqyu4uvi2zyu3w3mqayc2bxmaa&quot;,
       &quot;nodeImageName&quot;: &quot;Oracle-Linux-7.4&quot;,
       &quot;nodeShape&quot;: &quot;VM.Standard1.1&quot;,
       &quot;initialNodeLabels&quot;: [],
       &quot;sshPublicKey&quot;: &quot;&quot;,
       &quot;quantityPerSubnet&quot;: 1,
       &quot;subnetIds&quot;: [
               &quot;ocid1.subnet.oc1.iad.aaaaaaaajvfrxxawuwhvxnjliox7gzibonafqcyjkdozwie7q5po7qbawl4a&quot;
       ],
       &quot;nodes&quot;: [
           {
              &quot;id&quot;: &quot;ocid1.instance.oc1.iad.abuwcljtayee6h7ttavqngewglsbe3b6my3n2eoqawhttgtswsu66lrjgi4q&quot;,
              &quot;name&quot;: &quot;oke-c2domtbgmyt-nrdeodegu2t-soxdncj6x5a-0&quot;,
              &quot;availabilityDomain&quot;: &quot;Ppri:US-ASHBURN-AD-3&quot;,
              &quot;subnetId&quot;: &quot;ocid1.subnet.oc1.iad.aaaaaaaattodyph6wco6cmusyza4kyz3naftwf6yjzvog5h2g6oxdncj6x5a&quot;,
              &quot;nodePoolId&quot;: &quot;ocid1.nodepool.oc1.iad.aaaaaaaaae3tonjqgftdiyrxha2gczrtgu3winbtgbsdszjqmnrdeodegu2t&quot;,
              &quot;publicIp&quot;: &quot;100.211.162.17&quot;,
              &quot;nodeError&quot;: null,
              &quot;lifecycleState&quot;: &quot;UPDATING&quot;,
              &quot;lifecycleDetails&quot;: &quot;waiting for running compute instance&quot;
           }
      ]
 }</code></pre><p></p><p>By default, okectl will create a sub-directory named &quot;.okectl&quot; within the same directory as the okectl binary. okectl will create two files within the &quot;.okectl&quot; directory:</p><ul><li><code>kubeconfig</code> - This file contains authentication and cluster connection information. It should be used with the <code>kubectl</code> command-line utility to access and configure the cluster.</li><li><code>nodepool.json</code> - This file contains a detailed output of the cluster and node pool configuration in JSON format.</li></ul><p>The output directory is configurable via the <code>--configDir</code> flag; the path should be provided as an absolute path.</p><p>All clusters created using okectl will be provisioned with the Kubernetes dashboard &amp; Helm/Tiller installed as additional options.</p><h3 id="conclusion">Conclusion</h3><p>Head over to the <a href="https://gitlab.com/byteQualia/okectl">okectl repository</a> for further information on accessing a cluster, and performing cluster operations using <code>kubectl</code> via the CLI or the Kubernetes dashboard.</p><p></p><p>Cover Photo by <a href="https://unsplash.com/@pawel_czerwinski?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Pawe&#x142; Czerwi&#x144;ski</a> on <a href="https://unsplash.com/t/textures-patterns?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a>.</p>]]></content:encoded></item><item><title><![CDATA[Deploying Grafeas to Oracle OKE (Container Engine for Kubernetes)]]></title><description><![CDATA[Grafeas is an open artifact metadata API designed to help audit and govern your software supply chain. 
Tracking Grafeas’ metadata can give you confidence about what containers are in your environment, and also provides the ability to enforce restrictions on which containers get deployed.]]></description><link>https://blog.bytequalia.com/running-grafeas-on-oracle-oke-container-engine-for-kubernetes/</link><guid isPermaLink="false">5e87d81fb6bdac0001718aa5</guid><category><![CDATA[Cloud Native]]></category><category><![CDATA[Kubernetes]]></category><dc:creator><![CDATA[Cameron Senese]]></dc:creator><pubDate>Sat, 04 Apr 2020 05:11:37 GMT</pubDate><media:content url="https://blog.bytequalia.com/content/images/2020/04/mitchell-luo-aIlpAEaW7S4-unsplash.jpg" medium="image"/><content:encoded><![CDATA[<h2 id="introduction">Introduction</h2><img src="https://blog.bytequalia.com/content/images/2020/04/mitchell-luo-aIlpAEaW7S4-unsplash.jpg" alt="Deploying Grafeas to Oracle OKE (Container Engine for Kubernetes)"><p>Welcome to this introduction to Grafeas!</p><p>First I&apos;ll introduce you to Grafeas, an open artifact metadata API designed to help audit and govern your software supply chain. 
Then I&apos;ll take you through a Grafeas deployment scenario on Oracle Container Engine for Kubernetes (OKE), whereby a Kubernetes validating admission controller is configured to integrate with Grafeas - providing a mechanism for real-time, policy-based decisions about whether pods are authorised to run on the cluster.</p><blockquote><em>This content is syndicated from <a href="https://gitlab.com/byteQualia/oke-grafeas-tutorial">this repository</a> - references to code snippets or files herein can be found over there.</em></blockquote><figure class="kg-card kg-image-card"><img src="https://blog.bytequalia.com/content/images/2020/04/mitchell-luo-aIlpAEaW7S4-unsplash-1.jpg" class="kg-image" alt="Deploying Grafeas to Oracle OKE (Container Engine for Kubernetes)" loading="lazy"></figure><h2 id="about-grafeas">About Grafeas</h2><p>At each stage of the software supply chain (code, build, test, deploy, and operate), different tools generate metadata about various software components. Examples include the identity of the developer, when the code was checked in and built, what vulnerabilities were detected, what tests were passed or failed, and so on. Grafeas&#x2019; goal is to provide the infrastructure to store and manage this metadata about artifacts associated with the software supply chain.</p><p>Grafeas (Greek word for &#x201C;scribe&#x201D;) provides organizations with a central source of truth for the tracking and enforcing of policies across software development teams and pipelines. The intention is that build, auditing and compliance tools can use the Grafeas API to store, query and retrieve metadata around a wide array of artifacts and components associated with the software development and deployment life cycle.</p><p>Tracking Grafeas&#x2019; metadata can give you confidence about what containers are in your environment, and also provides the ability to enforce restrictions on which containers get deployed. 
Deployment tooling can be configured to review Grafeas metadata for compliance with your policies before deploying. Grafeas can be used to enforce a wide variety of security policies. For example, you can configure policies to block container images with vulnerabilities from being deployed, to ensure that deployed images are built from a base image explicitly sanctioned by your security team, or to require that images go through your build pipeline.</p><p>Grafeas divides the metadata information into <a href="https://github.com/grafeas/grafeas#notes">notes</a> and <a href="https://github.com/grafeas/grafeas#occurrences">occurrences</a>. Notes are high-level descriptions of particular types of metadata. Occurrences are instantiations of notes, which describe how and when a given note occurs on the resource associated with the occurrence. This division allows for fine-grained access control of different types of metadata.</p><h2 id="tutorial-overview">Tutorial Overview</h2><p>Let&#x2019;s consider an example of how Grafeas can provide deploy-time control for a sample MySQL implementation using a demonstration verification pipeline.</p><p>This example will use the <code>docker.io/mysql/mysql-server:8.0.12</code> container for testing. We assume that you (as the QA engineer) want to create an attestation within Grafeas that will certify this image as safe for production usage. 
Only this image will run on our cluster; requests to create pods based on any other image that has not been approved will be rejected by the Kubernetes control plane.</p><figure class="kg-card kg-image-card"><img src="https://blog.bytequalia.com/content/images/2020/04/grafeas-deployment-scenario-v0.04.png" class="kg-image" alt="Deploying Grafeas to Oracle OKE (Container Engine for Kubernetes)" loading="lazy"></figure><p>The Grafeas deployment scenario illustration describes the verification pipeline that will be implemented into our cluster:</p><ul><li>A validating admission controller is configured via specific rules to intercept all pod creation requests to the Kubernetes API server, which occurs prior to persistence of the requested object.</li><li>Once initiated, the admission controller is configured to call a validating admission webhook, which is responsible for checking with the Grafeas API whether our image is authorised to run (admission webhooks are HTTP callbacks that receive admission requests and do something with them).</li></ul><p>Grafeas uses kind-specific schemas, such that each kind of metadata information adheres to a strict schema. In our Grafeas deployment we will be referencing metadata using the Grafeas kind <code>ATTESTATION</code>, which will certify that the MySQL image complies with our deployment policy requirements. By using the validating admission webhook, we are able to check at runtime for the expected Grafeas attestations, and block deployment when they aren&#x2019;t present.</p><blockquote><em>Note: It is suggested that this tutorial only be implemented in a non-mission-critical environment. 
The deployment scenario will prevent all pods from running, apart from those explicitly configured with the appropriate attestations within the Grafeas deployment.</em></blockquote><h3 id="prerequisites">Prerequisites</h3><p>You will need to have deployed your OKE Kubernetes cluster before implementing the deployment scenario. Follow the link to <a href="https://www.oracle.com/webfolder/technetwork/tutorials/obe/oci/oke-full/index.html" rel="nofollow">this tutorial</a> for guidance on the process.</p><ul><li>Create a <code>kubeconfig</code> authentication artefact. This will be used later in the tutorial to connect to the Grafeas server. Follow the link to <a href="https://www.oracle.com/webfolder/technetwork/tutorials/obe/oci/oke-full/index.html#DownloadthekubeconfigFilefortheCluster" rel="nofollow">this tutorial</a> for guidance on the process.</li><li>Kubernetes admission controllers need to be enabled. Since we are running Kubernetes on OKE, our required admission controllers are automatically enabled. 
OKE implements the <a href="https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#is-there-a-recommended-set-of-admission-controllers-to-use" rel="nofollow">recommended set of admission controllers</a> for a given Kubernetes cluster version.</li></ul><h3 id="clone-the-oke-grafeas-tutorial-repository">Clone the oke-grafeas-tutorial repository</h3><p>Clone the oke-grafeas-tutorial repository:</p><pre><code class="language-bash">git clone https://gitlab.com/byteQualia/oke-grafeas-tutorial.git</code></pre><p></p><p>Commands from this point forward will assume that you are in the <code>oke-grafeas-tutorial</code> directory.</p><h3 id="deploy-grafeas-server">Deploy Grafeas server</h3><p>Run the following command to deploy your Grafeas server:</p><pre><code class="language-bash">kubectl apply -f kubernetes/grafeas.yaml</code></pre><p></p><p>In our demonstration we are using a pre-built Grafeas image based on the <a href="https://github.com/Grafeas/Grafeas/tree/master/samples/server/go-server/api/server">example Grafeas server</a>. <em>The demonstration Grafeas deployment is configured as a lightweight implementation whose configuration data is ephemeral only, and will not persist across restarts.</em></p><h3 id="generate-gpg-gnupg-keys">Generate GPG (GnuPG) keys</h3><p>Next we generate a <a href="https://www.gnupg.org/gph/en/manual.html#INTRO" rel="nofollow">GPG</a> keypair that will be used to sign our container image metadata. 
We&apos;ll then retrieve the ID of the image signing key.</p><p>Assuming gpg is installed on your system, run the following command to generate a signing key:</p><pre><code class="language-bash">gpg --batch --gen-key pki/gpg.keygen</code></pre><p></p><p>Now issue the following command to retrieve the ID of the gpg signing key created in the previous step:</p><pre><code class="language-bash">gpg --list-keys --keyid-format short</code></pre><p></p><p>The output from the <code>gpg --list-keys</code> command contains the key ID:</p><pre><code class="language-bash">----------------------------
pub   2048R/89BEA918 2018-09-15
uid   signatory (example key signatory) &lt;signatory@example.com&gt;
sub   2048R/B5B42C98 2018-09-15</code></pre><p></p><p>The key ID in the above output example is <code>89BEA918</code>. Take note of the unique key ID generated in your shell session, and store it in the <code>GPG_KEY_ID</code> environment variable as follows:</p><pre><code class="language-bash">export GPG_KEY_ID=89BEA918</code></pre><p></p><p>Docker uses a content-addressable image store. The image ID is a SHA256 digest covering each of the image&apos;s layers. We will utilise the image ID as our unique identifier for the specific image that will be explicitly permitted to run on our cluster.</p><p>We will be using the <code>mysql/mysql-server:8.0.12</code> container image as our example white-listed resource. Run the following commands to sign a text file containing the image digest for <code>mysql/mysql-server:8.0.12</code> using gpg:</p><pre><code class="language-bash">### ### ###
# Create our mysql-image-digest.txt file..
### ### ###
cat &gt;mysql-image-digest.txt &lt;&lt;EOF
sha256:58c5d4635ab6c6ec23b542a274b9881dca62de19c793a8b8227a830a83bdbbdd
EOF
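### ### ###
# (Optional) cross-check the digest against a locally pulled copy of the
# image - the first RepoDigests entry should match the sha256 value above..
### ### ###
docker inspect --format &apos;{{index .RepoDigests 0}}&apos; mysql/mysql-server:8.0.12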
### ### ###
# Sign the file..
### ### ###
gpg -u signatory@example.com \
  --armor \
  --clearsign \
  --output=mysql-signature.gpg \
  mysql-image-digest.txt
### ### ###
# Verify the signature..
### ### ###
gpg --output - --verify mysql-signature.gpg</code></pre><p></p><p>The output signature verification should be similar to the following:</p><pre><code class="language-bash">gpg: Signature made Sat 15 Sep 2018 08:48:40 AM UTC using RSA key ID 89BEA918
gpg: Good signature from &quot;signatory (example key signatory) &lt;signatory@example.com&gt;&quot;</code></pre><p></p><p>In order for others to verify signed images - they must both trust, and have access to the image signer&apos;s public key. Run the following command to export the image signer&apos;s public key:</p><pre><code class="language-bash">gpg --armor --export signatory@example.com &gt; ${GPG_KEY_ID}.pub</code></pre><p></p><h3 id="create-grafeas-objects">Create Grafeas objects</h3><p>Now that we have a signed container image and a public key for verification, we can go ahead and create a Grafeas <code>attestationAuthority</code> note, and an associated <code>pgpSignedAttestation</code> occurrence using the Grafeas API. The <code>pgpSignedAttestation</code> occurrence is used to make statements about suitability for a deployment.</p><p>First, create a secure tunnel to the Grafeas API endpoint inside your cluster:</p><pre><code class="language-bash">kubectl port-forward \
  $(kubectl get pods -l app=grafeas -o jsonpath=&apos;{.items[0].metadata.name}&apos;) \
  8080:8080</code></pre><p></p><h4 id="create-grafeas-production-project">Create Grafeas production project</h4><p>Let&apos;s create a Grafeas project which will then contain our note &amp; associated occurrence:</p><pre><code class="language-bash">curl -v -X POST http://localhost:8080/v1alpha1/projects \
  -H &apos;Content-Type: application/json&apos; \
  -d &apos;{&quot;name&quot;: &quot;projects/image-signing&quot;}&apos;</code></pre><p></p><h4 id="create-grafeas-attestationauthority-note">Create Grafeas attestationAuthority note</h4><p>Then run the following commands to create the production <code>attestationAuthority</code> note:</p><pre><code class="language-bash">### ### ###
# Define the attestationAuthority Note content..
### ### ###
cat &gt;prod-note.json &lt;&lt;EOF
{
  &quot;name&quot;: &quot;projects/image-signing/notes/production&quot;,
  &quot;shortDescription&quot;: &quot;Production image signer&quot;,
  &quot;longDescription&quot;: &quot;Production image signer&quot;,
  &quot;kind&quot;: &quot;ATTESTATION_AUTHORITY&quot;,
  &quot;attestationAuthority&quot;: {
    &quot;hint&quot;: {
      &quot;humanReadableName&quot;: &quot;production&quot;
    }
  }
}
EOF
### ### ###
# Post the production attestationAuthority content..
### ### ###
curl -X POST \
  &quot;http://localhost:8080/v1alpha1/projects/image-signing/notes?noteId=production&quot; \
  -d @prod-note.json</code></pre><p></p><h4 id="create-grafeas-pgpsignedattestation-occurrence">Create Grafeas pgpSignedAttestation occurrence</h4><p>Now that the project and note have been created, we can create a <code>pgpSignedAttestation</code> occurrence referencing our production note:</p><pre><code class="language-bash">### ### ###
# Define GPG_SIGNATURE &amp; RESOURCE_URL environment variables..
### ### ###
export GPG_SIGNATURE=$(cat mysql-signature.gpg | base64)
export RESOURCE_URL=&quot;https://docker.io/mysql/mysql-server@sha256:58c5d4635ab6c6ec23b542a274b9881dca62de19c793a8b8227a830a83bdbbdd&quot;
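### ### ###
# Note: on GNU coreutils, base64 wraps its output at 76 columns, which
# would embed newlines in GPG_SIGNATURE and break the JSON document
# below. If your platform wraps the output, re-export without wrapping:
# export GPG_SIGNATURE=$(base64 -w 0 mysql-signature.gpg)
### ### ###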
### ### ###
# Define the pgpSignedAttestation Occurrence content..
### ### ###
cat &gt; prod-occurrence.json &lt;&lt;EOF
{
  &quot;resourceUrl&quot;: &quot;${RESOURCE_URL}&quot;,
  &quot;noteName&quot;: &quot;projects/image-signing/notes/production&quot;,
  &quot;attestationDetails&quot;: {
    &quot;pgpSignedAttestation&quot;: {
       &quot;signature&quot;: &quot;${GPG_SIGNATURE}&quot;,
       &quot;pgpKeyId&quot;: &quot;${GPG_KEY_ID}&quot;
    }
  }
}
EOF
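### ### ###
# (Optional) validate the rendered JSON before posting it - this catches
# problems such as a line-wrapped base64 signature breaking the document..
### ### ###
python3 -m json.tool prod-occurrence.json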
### ### ###
# Post the pgpSignedAttestation content..
### ### ###
curl -X POST \
  &apos;http://127.0.0.1:8080/v1alpha1/projects/image-signing/occurrences&apos; \
  -d @prod-occurrence.json</code></pre><p></p><p>At this point the Grafeas configuration is complete, and the <code>hub.docker.com/r/mysql/mysql-server v8.0.12</code> image can be verified through the Grafeas API.</p><p>With the configuration that is currently in place - only the <code>hub.docker.com/r/mysql/mysql-server v8.0.12</code> image identified by the <code>sha256:58c5d4635ab6c6ec23b542a274b9881dca62de19c793a8b8227a830a83bdbbdd</code> image digest can be verified by the Grafeas API. Authorisation of additional images would require a new Grafeas occurrence.</p><h3 id="kubernetes-configuration">Kubernetes configuration</h3><p>Next we will go ahead and create the Kubernetes Validating Admission Controller, and deploy our Validating Admission Webhook.</p><h4 id="deploy-the-image-signature-webhook-to-kubernetes">Deploy the Image Signature Webhook to Kubernetes</h4><p>Run the following command to create the image-signature-webhook configmap, and store the image signer&apos;s public key:</p><pre><code class="language-bash">kubectl create configmap image-signature-webhook \
  --from-file ${GPG_KEY_ID}.pub</code></pre><p></p><p>Next we create the tls-image-signature-webhook secret, and store the TLS certificates:</p><pre><code class="language-bash">kubectl create secret tls tls-image-signature-webhook \
  --cert=pki/image-signature-webhook.pem \
  --key=pki/image-signature-webhook-key.pem</code></pre><p></p><p>Now create the image-signature-webhook deployment, and the image-signature-webhook ValidatingWebhookConfiguration (adjust the manifest file names below if your clone of the repository differs):</p><pre><code class="language-bash">kubectl apply -f kubernetes/image-signature-webhook.yaml
kubectl apply -f kubernetes/validating-webhook-configuration.yaml</code></pre><p></p><p>At this point the Grafeas and Kubernetes configurations are complete. Requests to create pods in the cluster will now be intercepted by the validating admission controller.</p><h3 id="testing-the-admission-controller-and-webhook">Testing the Admission Controller and Webhook</h3><p>First we attempt to run the <code>oracle/nosql:4.3.11</code> container image, which doesn&apos;t have a <code>pgpSignedAttestation</code> occurrence in the Grafeas metadata repository. Attempt to create the pod:</p><pre><code class="language-bash">kubectl apply -f kubernetes/pod/nosql-server-4.3.11.yaml</code></pre><p></p><p>Notice the nosql-server pod was not created, and the following error was returned:</p><pre><code class="language-bash">The  &quot;&quot; is invalid: : No matched signatures for container image: docker.io/oracle/nosql:4.3.11</code></pre><p></p><p>The nosql-server pod wasn&apos;t created because the <code>oracle/nosql:4.3.11</code> container image was not verified by the image signature webhook. No related <code>pgpSignedAttestation</code> occurrence exists in the Grafeas metadata repository.</p><p>Now attempt to run the <code>docker.io/mysql/mysql-server@sha256:58c5d4635ab6c6ec23b542a274b9881dca62de19c793a8b8227a830a83bdbbdd</code> container image - which does have a <code>pgpSignedAttestation</code> occurrence in the Grafeas metadata repository:</p><pre><code class="language-bash">kubectl apply -f kubernetes/pod/mysql-server-8.0.12.yaml</code></pre><p></p><p>Now we receive a more familiar, successful response:</p><pre><code class="language-bash">pod &quot;mysql-server&quot; created</code></pre><p></p><p>At this point you should now have the following pods running in your cluster:</p><pre><code class="language-bash">kubectl get pods
NAME                                       READY     STATUS    RESTARTS   AGE
grafeas-7554b6bffd-6gsfq                   1/1       Running   0          58m
image-signature-webhook-6fd49f765f-8h9z9   1/1       Running   0          30m
mysql-server                               1/1       Running   0          30s
</code></pre><p></p><h3 id="logging">Logging</h3><p>To attach to your image-signature-webhook pod and get access to more detailed logging, refer to the logging section in the following <a href="https://github.com/cameronsenese/oke-grafeas-tutorial/blob/master/image-signature-webhook/README.md">instruction</a>.</p><h3 id="tidy-up">Tidy Up</h3><p>Run the following commands to remove the Kubernetes resources created during this tutorial:</p><pre><code class="language-bash">kubectl delete deployments grafeas image-signature-webhook
kubectl delete pods mysql-server
kubectl delete svc grafeas image-signature-webhook
kubectl delete secrets tls-image-signature-webhook
kubectl delete configmap image-signature-webhook
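### ### ###
# Also remove the admission webhook registration - otherwise the cluster
# will continue to block new pod creation. Substitute the configuration
# name from your manifests if it differs..
### ### ###
kubectl delete validatingwebhookconfiguration image-signature-webhook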
</code></pre><p></p><h2 id="conclusion">Conclusion</h2><p>Tracking Grafeas&apos; metadata allows a team to understand what containers are stored within the environment, and also to enforce restrictions on which containers get deployed. This central store of metadata promises to provide a whole new layer of visibility into the software supply chain &#x2014; and to enhance auditing and governance capabilities.</p><p></p><p>Cover Photo by <a href="https://unsplash.com/@mitchel3uo?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Mitchell Luo</a> on <a href="https://unsplash.com/collections/9483704/c-l-c-even-more-textures?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a></p>]]></content:encoded></item><item><title><![CDATA[Deploying an Oracle Serverless Function to upload OCI Usage Reports to the Oracle Autonomous Data Warehouse]]></title><description><![CDATA[In this tutorial I'll take you through a deployment scenario on OCI, whereby an Oracle Function (i.e. 
a managed serverless function) will be deployed to automatically retrieve OCI Usage Reports, and upload the data to an Autonomous Data Warehouse (ADW) instance.]]></description><link>https://blog.bytequalia.com/deploying-an-oracle-serverless-function-to-upload-oci-usage-reports-to-the-oracle-autonomous-data-warehouse/</link><guid isPermaLink="false">5e7c55f1b6bdac0001718a11</guid><category><![CDATA[Serverless]]></category><category><![CDATA[Cloud Native]]></category><dc:creator><![CDATA[Cameron Senese]]></dc:creator><pubDate>Thu, 26 Mar 2020 08:40:36 GMT</pubDate><media:content url="https://blog.bytequalia.com/content/images/2020/03/pawel-czerwinski-9wAj7tkzdH4-unsplash.jpg" medium="image"/><content:encoded><![CDATA[<h2 id="introduction">Introduction</h2><img src="https://blog.bytequalia.com/content/images/2020/03/pawel-czerwinski-9wAj7tkzdH4-unsplash.jpg" alt="Deploying an Oracle Serverless Function to upload OCI Usage Reports to the Oracle Autonomous Data Warehouse"><p>When using cloud computing services, it seems simple to say that managing costs is critical to success. As it turns out - the task of managing cloud computing costs can be easier said than done. Incurring unplanned costs can often be the result of a combination of factors, such as a lack of visibility into current consumption patterns and past trends, non-standard deployments which can originate from an absence of development and authorisation processes, a lack of organisation, or the absence of automated deployment and configuration tools.</p><figure class="kg-card kg-image-card"><img src="https://blog.bytequalia.com/content/images/2020/04/pawel-czerwinski-9wAj7tkzdH4-unsplash.jpg" class="kg-image" alt="Deploying an Oracle Serverless Function to upload OCI Usage Reports to the Oracle Autonomous Data Warehouse" loading="lazy"></figure><p>A combination of tooling and controls is critical in empowering organisations to effectively plan, forecast and manage cloud computing costs. 
Factors such as consolidated visibility of cloud inventory, policy based governance, role based access control, controlled stack templates, automated alerts and notifications, and cost analytics are essential.</p><p><a href="https://www.oracle.com/cloud/">Oracle Cloud Infrastructure</a> provides a complement of native controls, billing, and payment tools that make it easy to manage your service costs. For example, OCI Budgets can be used to set and manage limits on your Oracle Cloud Infrastructure spending. You can set alerts on your budget to let you know when you might exceed your budget, and you can view all of your budgets and spending in one place in the OCI console. Among other controls, such as Compartments and cost-tracking tags - OCI also provides a Cost Analysis dashboard, and access to Usage Reports.</p><p>An OCI Usage Report is a comma-separated value (CSV) file that can be used to get a detailed breakdown of your Oracle Cloud Infrastructure resources for audit or invoice reconciliation.</p><p>In this tutorial we&apos;ll take you through a deployment scenario on OCI, whereby an Oracle Function (i.e. a managed serverless function) will be deployed to automatically retrieve OCI Usage Reports, and upload the data to an Autonomous Data Warehouse (ADW) instance. &#xA0;</p><p>Storing the historical Usage Data in ADW provides a well-managed repository for historical account utilisation data, as well as making it available for introspection and analysis by popular analytics tooling such as Oracle Data Visualisation Desktop (DVD), or Oracle Analytics Cloud (OAC). 
These powerful data analysis tools will enable you to perform deep and comprehensive introspection of cloud resource utilisation, and also provide the ability to perform predictive cost analyses based on historical usage trends.</p><blockquote>This content is syndicated from <a href="https://gitlab.com/byteQualia/oci-usage-to-adw-function">this repository</a> - references to code snippets or files herein can be found over there.</blockquote><h3 id="about-oci-usage-reports">About OCI Usage Reports</h3><p>The OCI Usage Report is automatically generated daily, and is stored in an Oracle-owned object storage bucket. It contains one row per each Oracle Cloud Infrastructure resource (such as instance, object storage bucket, VNIC) per hour along with consumption information, metadata, and tags. Usage reports are retained for one year.</p><p>A comprehensive overview of the Usage Report schema is available <a href="https://docs.cloud.oracle.com/iaas/Content/Billing/Concepts/usagereportsoverview.htm">here</a>.</p><h3 id="about-oracle-functions">About Oracle Functions</h3><p>Oracle Functions is a fully managed, highly scalable, on-demand, Functions-as-a-Service (FaaS) platform, built on enterprise-grade Oracle Cloud Infrastructure and powered by the Fn Project open source engine. With Oracle Functions, you can deploy your code, call it directly or trigger it in response to events, and get billed only for the resources consumed during the execution.</p><p>Oracle Functions are &quot;container-native&quot;. 
This means that each function is a completely self-contained Docker image that is stored in your OCIR Docker Registry and pulled, deployed and invoked when you invoke your function.</p><h2 id="tutorial-overview">Tutorial Overview</h2><p>Let&#x2019;s consider an example of how we can:</p><ul><li>Provision an ADW instance &amp; a database table to host our OCI tenancy Usage Report data.</li><li>Create a custom Oracle Function to programmatically retrieve daily CSV data from the Object Storage Service, and insert the data into ADW.</li></ul><figure class="kg-card kg-image-card"><img src="https://blog.bytequalia.com/content/images/2020/03/adw-billing-deployment-scenario-v0.01-1.png" class="kg-image" alt="Deploying an Oracle Serverless Function to upload OCI Usage Reports to the Oracle Autonomous Data Warehouse" loading="lazy"></figure><p>The Oracle Function itself is written in Python (see <code><a href="https://github.com/cameronsenese/oci-usage-to-adw-function/blob/master/adw-billing/func.py">../oci-usage-to-adw-function/adw-billing/func.py</a></code>).<br>The function uses a custom container image based on oraclelinux:7-slim, and also includes oracle-instantclient19.3-basiclite, and rh-python36 (see <code><a href="https://github.com/cameronsenese/oci-usage-to-adw-function/blob/master/adw-billing/Dockerfile">../oci-usage-to-adw-function/adw-billing/Dockerfile</a></code>).</p><p>When invoked, the function uses a call to a &apos;resource principal provider&apos; that enables the function to authenticate and access the Usage Reports Object Storage (OSS) bucket, and to also download the credentials wallet used to access the ADW instance.</p><p>The function enumerates the usage reports contained within the OSS bucket, and will insert into the <code>oci_billing</code> table all Usage Reports data that has not previously been inserted. This means that the first time the function is invoked, an initial bulk upload of all historical Usage Report data will occur. 
For subsequent function invocations, only new Usage Data will be processed.</p><p>Resources referenced in this tutorial will be named as follows:</p><ul><li>Compartment containing the ADW instance: <em><strong>Demo-Compartment</strong></em></li><li>OCI IAM Dynamic Group Name: <em><strong>FnFunc-Demo</strong></em></li><li>Oracle Functions Application Name: <em><strong>billing</strong></em></li><li>Function Name: <em><strong>adw-billing</strong></em></li></ul><h3 id="prerequisites">Prerequisites</h3><p>The following should be completed before going ahead and creating your Oracle Cloud Function:</p><ul><li><strong>OCI Tenancy:</strong> If you don&apos;t already have an OCI tenancy, you can sign-up right <a href="https://www.oracle.com/cloud/free/">here</a> and experience the benefits of OCI with the included always free services, including Autonomous Data Warehouse!</li><li><strong>Deploy an Autonomous Data Warehouse:</strong> You will need to have deployed your Autonomous Data Warehouse instance prior to commencing implementation of the deployment scenario. Follow the link to <a href="https://docs.cloud.oracle.com/iaas/Content/Database/Tasks/adbcreating.htm">this tutorial</a> for guidance on the process.</li><li><strong>Download the DB Client Credential Package (Credentials Wallet):</strong> Information contained within the wallet will be used later in the tutorial. Follow the link to <a href="https://docs.cloud.oracle.com/iaas/Content/Database/Tasks/adbconnecting.htm">this tutorial</a> for guidance on the process.</li><li><strong>Access the Usage Reports Bucket:</strong> OCI IAM policies are required to be configured in order to access Usage Reports. 
Follow <a href="https://docs.cloud.oracle.com/iaas/Content/Billing/Tasks/accessingusagereports.htm">this tutorial</a> for guidance on the process.</li><li><strong>Set up your tenancy for function development:</strong> Follow the link to <a href="https://docs.cloud.oracle.com/iaas/Content/Functions/Tasks/functionsconfiguringtenancies.htm">this tutorial</a> for guidance on the process.</li><li><strong>Configure Your Client Environment for Function Development:</strong> Before you can start using Oracle Functions to create and deploy functions, you have to set up your client environment for function development. Follow the link to <a href="https://docs.cloud.oracle.com/iaas/Content/Functions/Tasks/functionsconfiguringclient.htm">this tutorial</a> for guidance on the process.</li></ul><h3 id="additional-iam-policies">Additional IAM Policies</h3><p>When a function you&apos;ve deployed to Oracle Functions is running, it can access other Oracle Cloud Infrastructure resources. To enable a function to access another Oracle Cloud Infrastructure resource, you have to include the function in a dynamic group, and then create a policy to grant the dynamic group access to that resource. Follow the link to <a href="https://docs.cloud.oracle.com/iaas/Content/Functions/Tasks/functionsaccessingociresources.htm">this tutorial</a> for guidance on creating a dynamic group.</p><p>For our deployment scenario we&apos;ll require our &quot;FnFunc-Demo&quot; dynamic group to access both the Usage Report object storage bucket, as well as our Autonomous Database instance. 
To enable this, create the following dynamic group and additional IAM policies:</p><h4 id="dynamic-group">Dynamic group</h4><p>For the below dynamic group definition, the <code>resource.compartment.id</code> is that of the &quot;Demo-Compartment&quot; where the application and associated function will be deployed:</p><pre><code class="language-bash">ALL {resource.type = &apos;fnfunc&apos;, resource.compartment.id = &apos;ocid1.compartment.oc1..aaaaaaaafnaar7sww76or6eyb3j625uji3tp4cb4tosaxx4wbxvvag4dkrra&apos;}</code></pre><p></p><h4 id="iam-policies">IAM policies</h4><p>Create the additional IAM policies:</p><pre><code class="language-bash">endorse dynamic-group FnFunc-Demo to read objects in tenancy usage-report
allow dynamic-group FnFunc-Demo to use autonomous-databases in compartment Demo-Compartment where request.permission=&apos;AUTONOMOUS_DATABASE_CONTENT_READ&apos;
</code></pre><p></p><h3 id="create-oci_billing-database-table">Create <code>oci_billing</code> Database Table</h3><p>Next we create our database table which will store the Usage Report data. To do this we will use the ADW instance built-in SQL Developer Web client. Follow the link to <a href="https://docs.oracle.com/en/cloud/paas/autonomous-data-warehouse-cloud/user/sql-developer-web.html#GUID-102845D9-6855-4944-8937-5C688939610F">this tutorial</a> for guidance on accessing SQL Developer Web as user ADMIN from your Autonomous Data Warehouse. Once connected, run the following SQL statement to create the <code>oci_billing</code> table:</p><pre><code class="language-sql">create table oci_billing(usage_report varchar2(150 CHAR), lineItem_referenceNo varchar2(150 CHAR), lineItem_tenantId varchar2(150 CHAR),  
lineItem_intervalUsageStart varchar2(150 CHAR), lineItem_intervalUsageEnd varchar2(150 CHAR), product_service varchar2(150 CHAR),  
product_resource varchar2(150 CHAR), product_compartmentId varchar2(150 CHAR), product_compartmentName varchar2(150 CHAR),  
product_region varchar2(150 CHAR), product_availabilityDomain varchar2(150 CHAR), product_resourceId varchar2(150 CHAR),  
usage_consumedQuantity varchar2(150 CHAR), usage_billedQuantity varchar2(150 CHAR), usage_consumedQuantityUnits varchar2(150 CHAR),  
usage_consumedQuantityMeasure varchar2(150 CHAR), lineItem_isCorrection varchar2(150 CHAR), lineItem_backreferenceNo varchar2(150 CHAR));
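-- (Optional) confirm the new table exists and is empty..
select count(*) from oci_billing;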
</code></pre><p></p><h3 id="create-oracle-functions-application-billing">Create Oracle Functions Application: <code>billing</code></h3><p>In Oracle Functions, an application is a logical grouping of functions &amp; a common context to store configuration variables that are available to all functions in the application.<br>Next, create an application named <code>billing</code> to host the <code>adw-billing</code> function. Follow the link to <a href="https://docs.cloud.oracle.com/iaas/Content/Functions/Tasks/functionscreatingapps.htm">this tutorial</a> for guidance on the process.</p><p>When creating applications, Oracle recommends that you use the same region as the Docker registry that&apos;s specified in the Fn Project CLI context, and be sure to select the compartment specified in the Fn Project CLI context.</p><h3 id="create-function-adw-billing">Create Function: <code>adw-billing</code></h3><p>Now to create the actual function!</p><h4 id="clone-the-oci-adw-billing-tutorial-git-repository">Clone the <code>oci-adw-billing-tutorial</code> git repository</h4><p>First, let&apos;s clone the <code>oci-adw-billing-tutorial</code> repository:</p><pre><code class="language-bash">git clone https://gitlab.com/byteQualia/oci-usage-to-adw-function.git
</code></pre><p></p><p>Commands from this point forward will assume that you are in the <code>../oci-usage-to-adw-function/adw-billing</code> directory, which is the directory containing the function code, and other dependencies such as the Dockerfile used to build the container image, the func.yaml (function configuration file), and a Python requirements definition file.</p><h3 id="create-the-function">Create the function</h3><p>Enter the following single Fn Project command to build the function and its dependencies as a Docker image, push the image to the specified Docker registry, and deploy the function to Oracle Functions:</p><pre><code class="language-bash">fn -v deploy --app billing
</code></pre><p></p><p>The Fn Project CLI will generate output similar to the following (abbreviated) detailing the steps taken to build and deploy the function.</p><pre><code class="language-bash">Deploying adw-billing to app: billing
Bumped to version 0.0.1
Building image...
...
...
0.0.1: digest: sha256:71c0f9fac6164b676b781970b5d79b86a28838081c6ea88e00cc1cf07630ccc6 size: 1363
Updating function adw-billing using image iad.ocir.io/tenancy/fnbilling/adw-billing:0.0.1...
</code></pre><p></p><h3 id="implement-function-configuration-parameters">Implement function configuration parameters</h3><p>Now that we have our function built and deployed - it requires the creation of a number of configuration parameters in order for it to operate successfully. User defined configuration parameters are made available to the function via key-value pairs known as custom configuration parameters.</p><p>To specify custom configuration parameters using the Fn Project CLI, the following command format is used:</p><pre><code class="language-bash">fn config function &lt;app-name&gt; &lt;function-name&gt; &lt;key&gt; &lt;value&gt;
</code></pre><p></p><p>Create the following custom configuration parameters using the <code>fn config function</code> command:</p><p><em>-- Tenancy Usage Report Bucket</em></p><pre><code class="language-bash">fn config function billing adw-billing usage_report_bucket &lt;value&gt;
</code></pre><p></p><p>The <code>&lt;value&gt;</code> field should contain the OCI tenancy OCID.</p><p><em>-- Credentials Wallet Configuration</em></p><pre><code class="language-bash">fn config function billing adw-billing TNS_ADMIN /tmp/wallet
</code></pre><p></p><p>After invocation, the function will connect to the ADW instance and download and extract a copy of the credentials wallet (containing tnsnames.ora) to the path /tmp/wallet. The <code>TNS_ADMIN</code> environment variable is used to specify the directory location for the tnsnames.ora file, which is used by the Oracle Instant Client when connecting to the ADW instance.</p><p><em>-- Database connection DSN</em></p><pre><code class="language-bash">fn config function billing adw-billing db_dsn &lt;value&gt;
</code></pre><p></p><p>The <code>&lt;value&gt;</code> field should contain the preferred DSN connection string for the database.</p><p>An ODBC DSN specifies the database server name, and other database-related information required to access the Autonomous Data Warehouse instance.<br>Available DSNs are contained within the tsnames.ora file, which is located in the credentials wallet previously downloaded in the Prerequisites section herein.</p><p>The tsnames.ora file will contain multiple DSNs of the format <code>name = (configuration data)</code>, e.g.</p><pre><code class="language-bash">db201908233333_high = (description= (address=(protocol=tcps)(port=1522)(host=adb.us-ashburn-1.oraclecloud.com))(connect_data=(service_name=aaaaaaaaaayxrey_db201908233333_high.adwc.oraclecloud.com))(security=(ssl_server_cert_dn=
        &quot;CN=adwc.uscom-east-1.oraclecloud.com,OU=Oracle BMCS US,O=Oracle Corporation,L=Redwood City,ST=California,C=US&quot;))   )
</code></pre><p></p><p>In the case of the above example, the required <code>&lt;value&gt;</code> will be the name <code>db201908233333_high</code>.</p><p><em>-- Autonomous Database OCID</em></p><pre><code class="language-bash">fn config function billing adw-billing db_ocid &lt;value&gt;
</code></pre><p></p><p>The <code>&lt;value&gt;</code> field should contain the Autonomous Database instance OCID. The ADW instance OCID can be found in the OCI console on the Autonomous Database Details page for your ADW instance.</p><p><em>-- Autonomous Database Username</em></p><pre><code class="language-bash">fn config function billing adw-billing db_user ADMIN</code></pre><p></p><p><em>-- Autonomous Database Password</em></p><pre><code class="language-bash">fn config function billing adw-billing db_pass &lt;value&gt;</code></pre><p></p><p>The <code>&lt;value&gt;</code> field should contain the ADW ADMIN user password specified during the instance creation.</p><h3 id="configure-function-logging">Configure function logging</h3><p>When a function you&apos;ve deployed to Oracle Functions is invoked, you&apos;ll typically want to store the function&apos;s logs so that you can review them later. You specify where Oracle Functions stores a function&apos;s logs by setting a logging policy for the application containing the function. Follow the link to <a href="https://docs.cloud.oracle.com/iaas/Content/Functions/Tasks/functionsexportingfunctionlogfiles.htm">this tutorial</a> for guidance on the process.</p><h3 id="invoke-the-function">Invoke the function</h3><p>To invoke the function, issue the following command:</p><pre><code class="language-bash">fn invoke billing adw-billing</code></pre><p></p><p>That&apos;s it! Once completed, your function will have inserted all historical Usage Report data into your ADW instance.<br><em>Note: The current maximum run time for an Oracle Function is 120 seconds. 
If your tenancy has hundreds of historical Usage Reports to process (there can be up to 365), it may take a few invocations to completely process the data backlog.</em></p><h3 id="function-return-data">Function return data</h3><p>The function will return a JSON array containing the names of the Usage Report files processed during the given invocation, e.g.</p><pre><code class="language-json">[&quot;0001000000010470.csv&quot;, &quot;0001000000010480.csv&quot;]
</code></pre><p></p><h3 id="inspect-function-logs">Inspect function logs</h3><p>The function has been configured to provide some basic logging regarding its operation.<br>The following excerpt illustrates the function log data relating to the download and processing of a single Usage Report file:</p><pre><code class="language-bash">oci.base_client.139777842449152 - INFO -  2019-10-17 06:17:41.425456: Request: GET https://objectstorage.us-ashburn-1.oraclecloud.com/n/bling/b/ocid1.tenancy.oc1..aaaaaaaac3l6hgyl../o/reports/usage-csv/0001000000076336.csv.gz 
oci._vendor.urllib3.connectionpool - DEBUG - https://objectstorage.us-ashburn-1.oraclecloud.com:443 &quot;GET /n/bling/b/ocid1.tenancy.oc1..aaaaaaaac3l6hgyl../o/reports/usage-csv/0001000000076336.csv.gz HTTP/1.1&quot; 200 256997 
...
...
root - INFO - finished downloading 0001000000076336.csv.gz
root - INFO - finished uploading 0001000000076336.csv.gz
root - INFO - report_id: ocid1.tenancy.oc1..aaaaaaaac3l6hgyl..-0001000000076776: 0
root - INFO - runtime: 06.092967748641968 seconds
</code></pre><p></p><h3 id="inspect-uploaded-usage-data-via-sql-developer-web-client">Inspect uploaded Usage Data via SQL Developer Web client</h3><p>Finally, let&apos;s use the ADW instance&apos;s built-in SQL Developer Web client to take a look at the usage data as stored in our data warehouse. Once connected to the SQL Developer Web client, run the following SQL statement:</p><pre><code class="language-sql">SELECT * FROM oci_billing;
</code></pre><p></p><p>On inspecting the result set, you will note that each column in the database corresponds to a field contained within the Usage Report CSV files.</p><p>The only exception is the column <code>usage_report</code>, which has been included to help ensure records remain unique - particularly if you are hosting usage data from multiple tenancies within a single database. It&apos;s also used by the function to determine whether a given Usage Report file has been previously inserted into the database.</p><p>The <code>usage_report</code> field stores a value that is a concatenation of the OCI tenancy OCID and the Usage Report CSV file name from which the data was sourced, for example:</p><pre><code class="language-bash">USAGE_REPORT
------------------------------------------------------------------------------------------------
ocid1.tenancy.oc1..aaaaaaaac3l6hgylozzuh2bxhf3557quavpa2v6675u2kejplzalhgk4nzka-0001000000010470
ocid1.tenancy.oc1..aaaaaaaac3l6hgylozzuh2bxhf3557quavpa2v6675u2kejplzalhgk4nzka-0001000000010470
ocid1.tenancy.oc1..aaaaaaaac3l6hgylozzuh2bxhf3557quavpa2v6675u2kejplzalhgk4nzka-0001000000010470
</code></pre><p> &#xA0;</p><p>Cover Photo by <a href="https://unsplash.com/@pawel_czerwinski?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Pawe&#x142; Czerwi&#x144;ski</a> on <a href="https://unsplash.com/t/textures-patterns?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a>.</p>]]></content:encoded></item><item><title><![CDATA[Integrating Hashicorp Vault with OKE (Oracle Container Engine for Kubernetes)]]></title><description><![CDATA[Overview of a HashiCorp Vault deployment scenario on OKE, whereby Vault will be both deployed to OKE, and integrated with the OKE cluster control plane using the Vault Kubernetes Auth Method.]]></description><link>https://blog.bytequalia.com/integrating-hashicorp-vault-with-oke-oracle-container-engine-for-kubernetes/</link><guid isPermaLink="false">5e7c2b52b6bdac00017189c4</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[Cloud Native]]></category><dc:creator><![CDATA[Cameron Senese]]></dc:creator><pubDate>Thu, 26 Mar 2020 04:28:08 GMT</pubDate><media:content url="https://blog.bytequalia.com/content/images/2020/03/jr-korpa-ZkngO3nRFCY-unsplash.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://blog.bytequalia.com/content/images/2020/03/jr-korpa-ZkngO3nRFCY-unsplash.jpg" alt="Integrating Hashicorp Vault with OKE (Oracle Container Engine for Kubernetes)"><p>Welcome to this guide to integrating Vault with Oracle Container Engine for Kubernetes (OKE).</p><figure class="kg-card kg-image-card"><img src="https://blog.bytequalia.com/content/images/2020/04/jr-korpa-ZkngO3nRFCY-unsplash.jpg" class="kg-image" alt="Integrating Hashicorp Vault with OKE (Oracle Container Engine for Kubernetes)" loading="lazy"></figure><h2 id="introduction">Introduction</h2><p>First I&apos;ll introduce you to Hashicorp Vault, a comprehensive tool designed for the secure management of secrets.</p><p>Then I&apos;ll provide an overview of the Vault <a 
href="https://www.vaultproject.io/docs/auth/kubernetes.html" rel="noopener">Kubernetes Auth Method</a>, which is used to facilitate authentication with Vault by using a Kubernetes Service Account Token. This method of authentication leverages native Kubernetes identity and access management, and makes it easy to introduce a Vault token into a Kubernetes Pod.</p><p>Finally, I&apos;ll provide an overview of a Vault deployment scenario on OKE, whereby Vault will be both deployed to OKE, and integrated with the OKE cluster control plane using the Vault Kubernetes Auth Method. This deployment scenario is fully documented in this <a href="https://cloudnative.oracle.com/template.html#infrastructure/security/Vault/tutorial.md">tutorial</a> and accompanying <a href="https://gitlab.com/byteQualia/oke-hashicorp-vault-tutorial">work instruction</a>, including a step-by-step guide and necessary configuration files.</p><p>Oracle Cloud Infrastructure <a href="https://docs.cloud.oracle.com/iaas/Content/ContEng/Concepts/contengoverview.htm">Container Engine for Kubernetes</a> (often abbreviated as <code>OKE</code>) is a fully-managed, scalable, and highly available service that you can use to deploy your containerized applications to the cloud. Use OKE when your development team wants to reliably build, deploy, and manage cloud-native applications. OKE uses Kubernetes - the open-source system for automating deployment, scaling, and management of containerized applications across clusters of hosts.</p><h3 id="vault">Vault</h3><p>Secrets management is one of the core use cases for Vault. A secret is anything that you want to tightly control access to, such as API keys, passwords, or certificates. 
Vault provides a unified interface to any type of secret, while providing important features such as an API-driven framework delivering tight access control, version control, and detailed audit logging.</p><p>Many organisations have credentials hard-coded in source code, littered throughout configuration files and configuration management tools, and stored in plaintext in version control, wikis, and shared volumes. Vault provides a central place to store these credentials, ensuring they are encrypted, that access is audit-logged, and that they are exposed only to authorized clients.</p><p>Vault provides a wide array of features across secrets management, data protection, identity-based access, collaboration &amp; operations, and governance and compliance.</p><p>Vault is able to provide tight control over access to secrets and encryption keys by authenticating against trusted sources of identity, such as Active Directory, LDAP, Kubernetes, and cloud platforms. Vault enables fine-grained authorization over which users and applications are permitted access to secrets and keys.</p><h3 id="kubernetes-auth-method">Kubernetes Auth Method</h3><p>When working with Vault, successful authentication is a prerequisite for actors to store/retrieve secrets and perform cryptographic operations. Tokens are the core method for authentication within Vault, which means that the secret consumer must first acquire a valid token. The Vault authentication process verifies the client&apos;s identity (the secrets consumer), and then generates a token to associate with that identity.</p><p>Vault provides a range of <a href="https://www.vaultproject.io/docs/auth/index.html" rel="noopener">Auth Methods</a> to address application requirements when running on a variety of platforms. 
The <a href="https://www.vaultproject.io/docs/auth/kubernetes.html" rel="noopener">Kubernetes Auth Method</a> works well for Kubernetes-based orchestrators such as OKE.</p><p>The Kubernetes Auth Method is used to authenticate with Vault using a Kubernetes Service Account Token. The token for a pod&#x2019;s service account is automatically mounted within a pod at <code>/var/run/secrets/kubernetes.io/serviceaccount/token</code> when a pod is instantiated. When using the Kubernetes Auth Method, it is this token which is sent to Vault for authentication.</p><p>Vault is configured with a service account that has permissions to access the Kubernetes <a href="https://docs.openshift.org/latest/rest_api/apis-authentication.k8s.io/v1.TokenReview.html">TokenReview</a> API. This service account is used by Vault to make authenticated calls to the Kubernetes API Server in order to verify service account tokens presented by pods that want to connect to Vault to access secrets.</p><figure class="kg-card kg-image-card"><img src="https://blog.bytequalia.com/content/images/2020/03/vault_deployment_scenario_v0_01.png" class="kg-image" alt="Integrating Hashicorp Vault with OKE (Oracle Container Engine for Kubernetes)" loading="lazy"></figure><p>The Vault deployment scenario illustration describes the solution architecture implemented in the referenced <a href="https://github.com/cameronsenese/oke-hashicorp-vault-tutorial">work instruction</a>.</p><p>Flow 2, represented in orange, provides a high-level representation of the authentication workflow implemented via the Kubernetes Auth Method.</p><p>On successful completion, Vault returns a token to the application with pre-configured policies attached. 
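</p><p>As an illustration of this exchange, here is a minimal Python sketch of the login request an application would POST to Vault (the endpoint and payload shape follow Vault&apos;s Kubernetes auth API; the role name is just an example):</p>

```python
import json

# Default location where Kubernetes mounts the pod's service account token.
SA_TOKEN_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/token"

def build_k8s_login_request(role, token_path=SA_TOKEN_PATH):
    """Build (endpoint, body) for Vault's Kubernetes auth login call.

    The application POSTs the body to v1/auth/kubernetes/login; Vault
    verifies the JWT via the TokenReview API and returns a client token.
    """
    with open(token_path) as f:
        jwt = f.read().strip()
    return "v1/auth/kubernetes/login", json.dumps({"role": role, "jwt": jwt})
```

<p>In the scenario described below, the role would be <code>testapp</code>, matching the Vault role bound to the <code>testapp</code> service account.</p><p>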
From step 3 onward, the application can use the token to retrieve secrets from Vault&#x2019;s Key/Value secrets engine.</p><h3 id="deployment-scenario">Deployment Scenario</h3><p>The following provides a break-down of the components provisioned in the deployment scenario:</p><ul><li>OKE Kubernetes cluster</li><li>etcd and Vault Kubernetes operators</li><li>Vault cluster</li><li>Vault configured to use Kubernetes auth method</li><li>Vault Key/Value (KV) store <code>secret/testapp</code></li><li>Vault role &apos;testapp&apos; associated with the Kubernetes service account &apos;testapp&apos; in the &apos;default&apos; namespace</li><li>Vault policy &apos;testapp-kv-crud&apos; associated with &apos;testapp&apos; role (providing CRUD access to the <code>secret/testapp</code> KV store)</li><li>etcd cluster serving as persistent storage tier for the Vault cluster</li><li>Test application authenticated to vault (via Kubernetes auth) using the <code>testapp</code> service account</li><li>Test application creating and reading secrets from the Vault <code>secret/testapp</code> KV store</li></ul><p>Both the etcd and Vault clusters will be created by their respective Kubernetes operators. The Vault operator deploys and manages Vault clusters on Kubernetes. Vault instances created by the Vault operator are highly available and support automatic failover and upgrade. 
For each Vault cluster, the Vault operator will also create an etcd cluster for the storage backend.</p><p>The following is a high level outline of the process described in the <a href="https://cloudnative.oracle.com/template.html#infrastructure/security/Vault/tutorial.md">work instruction</a> to install and integrate Vault with OKE:</p><ol><li>Deploy the Vault &amp; etcd operators</li><li>Deploy the Vault &amp; etcd clusters</li><li>Configure Vault Kubernetes auth</li><li>Create the Vault Key/Value (KV) store &amp; associated policy for the test application</li><li>Deploy the test application and authenticate to Vault using Kubernetes auth</li><li>Create and read secrets from the KV store</li></ol><p>The work instruction is a great way to quickly get up and running with a comprehensive, HA deployment of Vault on OKE.</p><h3 id="summary">Summary</h3><p>Integrating Vault authentication with Kubernetes using the Kubernetes Auth Method serves to simplify the process of authenticating Vault clients by using a Kubernetes Service Account Token.</p><p>Some challenges do remain around solving how to manage the lifecycle of tokens in a standard way, without having to write custom application logic. The Vault team are planning a number of features which are designed to address these challenges.</p><p>One of the proposed mechanisms involves integrating Vault with the Kubernetes Secrets mechanism via a periodically running sync process. Another involves a Container Storage Interface plugin to inject secrets into a running pod. 
There&apos;s also mention of injecting Vault secrets into Pods via a sidecar container - each a very interesting approach to addressing this complexity.</p><p>The Kubernetes and cloud native space is moving fast - watch this space as we will continue to share solutions, tutorials, best practices, and more, all designed to spark inspiration, drive hands-on experiences, and unleash our industry&#x2019;s potential.</p><p></p><p>Cover Photo by <a href="https://unsplash.com/@korpa?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">JR Korpa</a> on <a href="https://unsplash.com/t/textures-patterns?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a>.</p>]]></content:encoded></item><item><title><![CDATA[Scheduling OCI CLI commands to run via a Kubernetes CronJob]]></title><description><![CDATA[In this blog post I'll show you how to configure a Kubernetes CronJob to run Oracle Cloud Infrastructure CLI commands automatically, on a recurring schedule.
In the example solution, we'll be scheduling the invocation of an Oracle Serverless Function.]]></description><link>https://blog.bytequalia.com/scheduling-oci-cli-commands-to-run-via-a-kubernetes-cronjob/</link><guid isPermaLink="false">5e7b39698805fc0001e24ba3</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[Serverless]]></category><category><![CDATA[Cloud Native]]></category><dc:creator><![CDATA[Cameron Senese]]></dc:creator><pubDate>Thu, 26 Mar 2020 02:46:40 GMT</pubDate><media:content url="https://blog.bytequalia.com/content/images/2020/03/pawel-czerwinski-TkWfKC0Tb8g-unsplash.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://blog.bytequalia.com/content/images/2020/03/pawel-czerwinski-TkWfKC0Tb8g-unsplash.jpg" alt="Scheduling OCI CLI commands to run via a Kubernetes CronJob"><p>In this blog post I&apos;ll show you how to configure a Kubernetes CronJob to run Oracle Cloud Infrastructure CLI commands automatically, on a recurring schedule.</p><p>In the example solution, we&apos;ll be scheduling the invocation of an Oracle Serverless Function.</p><figure class="kg-card kg-image-card"><img src="https://blog.bytequalia.com/content/images/2020/04/pawel-czerwinski-TkWfKC0Tb8g-unsplash.jpg" class="kg-image" alt="Scheduling OCI CLI commands to run via a Kubernetes CronJob" loading="lazy"></figure><h3 id="about-kubernetes-cronjobs">About Kubernetes CronJobs</h3><p>It&apos;s common for developers and operators to have a range of different tasks scheduled to run automatically in the background. These scheduled commands or tasks are typically known as &#x201C;Cron Jobs&#x201D;.</p><p>Kubernetes supports the creation of <a href="https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/">CronJobs</a>, which is a mechanism for configuring Kubernetes to run containerised tasks on a time-based schedule. 
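</p><p>The schedule itself uses the standard five-field crontab format (minute, hour, day of month, month, day of week). As a rough illustration of how a minute field such as <code>*/5</code> expands, here is a small Python sketch that covers only the subset of the syntax used in this post:</p>

```python
def expand_minute_field(field):
    """Expand a crontab minute field into the matching minutes of the hour.

    Handles only '*', step values like '*/5', and comma lists like '0,30',
    a tiny subset of full cron syntax, for illustration only.
    """
    if field == "*":
        return list(range(60))
    if field.startswith("*/"):
        return list(range(0, 60, int(field[2:])))
    return sorted(int(part) for part in field.split(","))
```

<p>For <code>*/5</code> this yields minutes 0, 5, 10, and so on up to 55, i.e. the job fires every five minutes.</p><p>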
These automated jobs run like Cron tasks on a Linux or UNIX system.</p><p>Cron jobs are useful for creating periodic and recurring tasks, like running backups or sending emails. Cron jobs can also schedule individual tasks for a specific time, such as scheduling a job for a low-activity period.</p><h3 id="about-the-oci-cli">About the OCI CLI</h3><p>The Oracle Cloud Infrastructure CLI is a powerful and easy-to-use tool that provides the same core capabilities as the Oracle Cloud Infrastructure Console, plus additional commands that can extend the Console&apos;s functionality. The CLI is convenient for developers or anyone who prefers the command line to a GUI.</p><p>The CLI supports orchestration and configuration of many OCI services, including Core Services (Networking, Compute, Block Volume), Database, Load Balancing, Serverless Functions, and many more. The complete list of supported services is <a href="https://docs.cloud.oracle.com/iaas/Content/API/Concepts/cliconcepts.htm">available here</a>.</p><h3 id="about-oracle-functions">About Oracle Functions</h3><p><a href="https://docs.cloud.oracle.com/iaas/Content/Functions/Concepts/functionsoverview.htm">Oracle Functions</a> is a fully managed, highly scalable, on-demand, Functions-as-a-Service (FaaS) platform, built on enterprise-grade Oracle Cloud Infrastructure, and powered by the Fn Project open source engine.</p><p>With Oracle Functions, you can deploy your code, call it directly or trigger it in response to events, and get billed only for the resources consumed during the execution.</p><p>Oracle Functions are &quot;container-native&quot;. 
This means that each function is a completely self-contained Docker image that is stored in your OCIR Docker Registry and pulled, deployed and invoked when you invoke your function.</p><h2 id="tutorial-overview">Tutorial Overview</h2><p>First, we&apos;re going to be building a container image containing the OCI CLI, then we&apos;ll configure a Kubernetes Secret to host your CLI configuration parameters and credentials.</p><p>Once we&apos;ve pushed our container image to the OCI Registry, we&apos;ll schedule our containerised OCI CLI to invoke our Serverless Function via a Kubernetes CronJob.</p><figure class="kg-card kg-image-card"><img src="https://blog.bytequalia.com/content/images/2020/03/kube_sched_oci_cli_v0_01.png" class="kg-image" alt="Scheduling OCI CLI commands to run via a Kubernetes CronJob" loading="lazy"></figure><h3 id="prerequisites">Prerequisites</h3><p>First, let&apos;s implement and configure the components that you need in order to deploy an OCI CLI command as a scheduled CronJob on Container Engine for Kubernetes.</p><p>You will need:</p><ul><li>An <a href="https://docs.cloud.oracle.com/iaas/Content/ContEng/Tasks/contengcreatingclusterusingoke.htm">Oracle Container Engine for Kubernetes</a> cluster that is up and running</li><li>An <a href="https://www.oracle.com/webfolder/technetwork/tutorials/infographics/oci_faas_gettingstarted_quickview/functions_quickview_top/functions_quickview/index.html#">Oracle Cloud Function</a> provisioned in your OCI tenancy</li><li><a href="https://docs.cloud.oracle.com/iaas/Content/API/SDKDocs/cliinstall.htm">Oracle Cloud Infrastructure CLI</a> and kubectl command lines installed and configured on your development workstation</li></ul><h3 id="deployment-process">Deployment Process</h3><h6 id="1-create-kubernetes-secret">1. Create Kubernetes Secret</h6><p>In the process of working through the prerequisites section, you will have installed and configured the OCI CLI on your development workstation. 
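</p><p>For reference, the CLI configuration is a small INI file whose <code>DEFAULT</code> profile typically carries the <code>user</code>, <code>fingerprint</code>, <code>key_file</code>, <code>tenancy</code>, and <code>region</code> entries. The following Python sketch (key names per the standard OCI CLI config layout; it is an illustration, not part of the deployment) can sanity-check a config before it is stored in a Secret:</p>

```python
import configparser

# Keys the OCI CLI expects in a profile (standard config file layout).
REQUIRED_KEYS = ("user", "fingerprint", "key_file", "tenancy", "region")

def missing_config_keys(config_text, profile="DEFAULT"):
    """Return any required keys absent from the given profile of an OCI CLI config."""
    parser = configparser.ConfigParser()
    parser.read_string(config_text)
    entries = parser.defaults() if profile == "DEFAULT" else parser[profile]
    return [key for key in REQUIRED_KEYS if key not in entries]
```

<p>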
As part of this process, you configured the CLI config file and OCI API private key.</p><p>We&apos;re now going to copy and store these artifacts inside a Kubernetes Secret, which will be used to authenticate the CLI to your OCI tenancy each time our scheduled task runs.</p><p>From your development workstation, run the following command:</p><pre><code class="language-bash">kubectl create secret generic oci-cli-config --from-file=&lt;oci-config-file&gt; --from-file=&lt;rsa-private-key&gt;</code></pre><p></p><p>Substitute the values <strong><code>&lt;oci-config-file&gt;</code></strong> and <strong><code>&lt;rsa-private-key&gt;</code></strong> with the paths to the files on your development workstation, e.g.</p><pre><code class="language-bash">kubectl create secret generic oci-cli-config --from-file=./.oci/config --from-file=./.oci/ssh/id_rsa_pri.pem</code></pre><p></p><p>The OCI config file and RSA private key will be automatically mounted into the container filesystem at the path <strong><code>/root/.oci</code></strong> at runtime.</p><p>As this is the default file system location for the OCI CLI config files, with no further configuration the CLI will use these files to authenticate to your OCI tenancy each time the scheduled task is invoked.</p><h6 id="2-build-the-cli-container-image">2. Build the CLI container image</h6><p>Create a directory on your development workstation named <strong><code>oci-fn-cron</code></strong> and create a file named <strong><code>Dockerfile</code></strong> in the directory with the following content:</p><pre><code class="language-docker">FROM oraclelinux:7-slim
ENV OCI_CLI_SUPPRESS_FILE_PERMISSIONS_WARNING=True
ENV PATH=&quot;/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/rh/rh-python36/root/usr/bin:${PATH}&quot;
ENV LC_ALL=en_US.utf8
ENV LANG=en_US.utf8
ARG CLI_VERSION=2.6.14
RUN mkdir /oci-cli
WORKDIR /oci-cli
RUN yum -y install oracle-release-el7 &amp;&amp; \
    yum -y install oracle-softwarecollection-release-el7 &amp;&amp; \
    yum-config-manager --enable software_collections &amp;&amp; \
    yum-config-manager --enable ol7_latest ol7_optional_latest ol7_addons &amp;&amp; \
    yum-config-manager --disable ol7_ociyum_config &amp;&amp; \
    yum -y install scl-utils &amp;&amp; \
    yum -y install rh-python36 &amp;&amp; \
    yum -y install gcc &amp;&amp; \
    yum -y install wget &amp;&amp; \
    yum -y install unzip &amp;&amp; \
    rm -rf /var/cache/yum
RUN wget -q -O oci-cli.zip &quot;https://github.com/oracle/oci-cli/releases/download/v${CLI_VERSION}/oci-cli-${CLI_VERSION}.zip&quot; &amp;&amp; \
    unzip -q oci-cli.zip -d .. &amp;&amp; \
    rm oci-cli.zip &amp;&amp; \
    pip3 install oci_cli-*-py2.py3-none-any.whl &amp;&amp; \
    yes | oci setup autocomplete
ENTRYPOINT [&quot;/opt/rh/rh-python36/root/usr/bin/oci&quot;]</code></pre><p></p><p>From the <strong><code>oci-fn-cron</code></strong> directory, run the following command to build the CLI container image:</p><pre><code class="language-bash">docker build -t oci-fn-cron .</code></pre><p></p><h6 id="3-push-the-container-image-to-the-oci-registry">3. Push the container image to the OCI registry</h6><p>In this next step, we&apos;ll push the local copy of the image up to the cloud. <em><em>For a great walkthrough on how to use the OCI Registry service, check out <a href="https://www.oracle.com/webfolder/technetwork/tutorials/obe/oci/registry/index.html" rel="nofollow">this article</a>.</em></em></p><p>You will need to log into your Oracle Cloud Infrastructure console. Your user will either need to be a part of the tenancy&apos;s Administrators group, or another group with the <strong><code>REPOSITORY_CREATE</code></strong> permission.</p><p>After confirming you have the proper permissions, generate an auth token for your user. <em><em>Be sure to take a copy of the token as you will not be able to access it again.</em></em></p><p>In the OCI console, navigate to the <strong><code>Developer Services | Registry (OCIR)</code></strong> tab, and select the OCI region to which you would like to push the image. This should be the same region into which you provisioned your OKE cluster.</p><h6 id="log-into-the-oci-registry">Log into the OCI registry</h6><p>Log into the OCI registry in your development environment using the docker login command:</p><pre><code class="language-bash">docker login &lt;region-key&gt;.ocir.io</code></pre><p></p><p><strong><code>&lt;region-key&gt;</code></strong> corresponds to the code for the Oracle Cloud Infrastructure region you&apos;re using. 
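</p><p>Putting the naming pieces together: a fully qualified OCIR image reference combines the registry host (the region key plus <code>ocir.io</code>), the tenancy namespace, the repository path, and the image name and tag. A small illustrative sketch (the region key <code>iad</code> corresponds to Ashburn; the other names are placeholders):</p>

```python
def ocir_image_name(region_key, tenancy_namespace, repo, image, tag="latest"):
    """Build a fully qualified OCIR image reference for docker tag/push."""
    return f"{region_key}.ocir.io/{tenancy_namespace}/{repo}/{image}:{tag}"

# e.g. ocir_image_name("iad", "mytenancy", "oci-cron", "oci-fn-cron")
# gives "iad.ocir.io/mytenancy/oci-cron/oci-fn-cron:latest"
```

<p>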
See <a href="https://docs.cloud.oracle.com/iaas/Content/General/Concepts/regions.htm" rel="nofollow">this</a> reference for the available regions and associated keys.</p><p>When prompted, enter your username in the format <strong><code>&lt;tenancy_name&gt;/&lt;username&gt;</code></strong>, and enter the auth token you copied earlier as the password.</p><h6 id="tag-the-container-image">Tag the container image</h6><p>Next we&apos;ll tag the OCI CLI image that we&apos;re going to push to the OCI registry:</p><pre><code class="language-bash">docker tag oci-fn-cron:latest &lt;region-code&gt;.ocir.io/&lt;tenancy-name&gt;/oci-cron/oci-fn-cron:latest</code></pre><p></p><h6 id="push-the-image-to-the-oci-registry">Push the image to the OCI registry</h6><p>And now we&apos;ll use the docker push command to push the container image to the OCI registry:</p><pre><code class="language-bash">docker push &lt;region-code&gt;.ocir.io/&lt;tenancy-name&gt;/oci-cron/oci-fn-cron:latest</code></pre><p></p><p>Within the OCI console Registry UI you will now be able to see the newly created repository &amp; image.</p><h6 id="4-schedule-the-kubernetes-cronjob">4. Schedule the Kubernetes CronJob</h6><p>On your development workstation create a file named <strong><code>oci-fn-cron.yaml</code></strong> with the following content:</p><pre><code class="language-yaml">kind: CronJob
apiVersion: batch/v1beta1
metadata:
  name: oci-functions-cron
spec:
  schedule: &quot;*/5 * * * *&quot;
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: oci-functions-cron
            image: &lt;region-code&gt;.ocir.io/&lt;tenancy-name&gt;/oci-cron/oci-fn-cron:latest
            command: [&quot;/opt/rh/rh-python36/root/usr/bin/oci&quot;]
            args: [&quot;--debug&quot;, &quot;fn&quot;, &quot;function&quot;, &quot;invoke&quot;, &quot;--function-id&quot;, &quot;&lt;function-ocid&gt;&quot;, &quot;--file&quot;, &quot;-&quot;, &quot;--body&quot;, &quot;-&quot;]
            imagePullPolicy: Always
            volumeMounts:
            - name: oci-cli-config
              mountPath: &quot;/root/.oci&quot;
              readOnly: true
            ports:
            - containerPort: 9081
          restartPolicy: Never
          volumes:
          - name: oci-cli-config
            secret:
              secretName: oci-cli-config
              items:
              - key: config
                path: config
              - key: id_rsa_pri.pem
                path: ssh/id_rsa_pri.pem</code></pre><p></p><p>Remember to update the following:</p><ul><li>Line 13 with your OCI <strong><code>region-code</code></strong> and <strong><code>tenancy-name</code></strong></li><li>Line 15 <strong><code>function-ocid</code></strong> with the OCID of the Oracle Function you wish to invoke</li></ul><p>Save the file, and submit the CronJob with the following command:</p><pre><code class="language-bash">kubectl apply -f oci-fn-cron.yaml</code></pre><p></p><h6 id="verifying-cronjob-operation">Verifying CronJob operation</h6><p>You can validate that your CronJob is functioning correctly by following this procedure:</p><p>1. Obtain the job execution history by entering the command:</p><pre><code class="language-bash">kubectl get jobs --watch</code></pre><p></p><p>The output will look similar to the following:</p><pre><code class="language-bash">NAME                            COMPLETIONS   DURATION   AGE
oci-functions-cron-1575886560   1/1           43s        4m45s
oci-functions-cron-1575886680   1/1           34s        2m44s
oci-functions-cron-1575886800   1/1           35s        44s</code></pre><p></p><p>2. Enter the following command to obtain the pod name associated with the scheduled job, replacing <strong><code>&lt;job-name&gt;</code></strong> with the job name received via the previous command output:</p><pre><code class="language-bash">kubectl get pods --selector=job-name=&lt;job-name&gt; --output=jsonpath={.items[*].metadata.name}</code></pre><p></p><p>3. Enter the following command to obtain the logs associated with the executed CLI command, replacing <strong><code>&lt;pod-name&gt;</code></strong> with the pod name received via the previous command output:</p><pre><code class="language-bash">kubectl logs &lt;pod-name&gt;</code></pre><p></p><p>If your function was invoked correctly, the output will look similar to the following - which is the log data generated by the CLI running the <strong><code>fn function invoke</code></strong> command that we defined in the Kubernetes CronJob:</p><pre><code class="language-bash">INFO:oci.base_client.140263433586560: 2019-12-09 21:54:04.502876: Request: GET https://functions.us-ashburn-1.oci.oraclecloud.com/20181201/functions/ocid1.fnfunc.oc1.iad.aaaaaaaaadfmkqscppi63jistu4t7au2veexg5in6lykzovzmvaja6vqmwsa
DEBUG:oci.base_client.140263433586560: 2019-12-09 21:54:04.601030: time elapsed for request 4AFF967835C440009D15F3CFAAC404D2: 0.0978535171598196
DEBUG:oci.base_client.140263433586560: 2019-12-09 21:54:04.601216: time elapsed in response: 0:00:00.092832
DEBUG:oci.base_client.140263433586560: 2019-12-09 21:54:04.601319: Response status: 200
DEBUG:oci.base_client.140263433586560: 2019-12-09 21:54:04.603549: python SDK time elapsed for deserializing: 0.0020453811157494783
DEBUG:oci.base_client.140263433586560: 2019-12-09 21:54:04.603681: Response returned
DEBUG:oci.base_client.140263433586560:time elapsed for request: 0.1009602730628103
INFO:oci.base_client.140263433271056: 2019-12-09 21:54:04.608421: Request: POST https://newg3h4jqoq.usashburn1.functions.oci.oraclecloud.com/20181201/functions/ocid1.fnfunc.oc1.iad.aaaaaaaaadfmkqscppi63jistu4t7au2veexg5in6lykzovzmvaja6vqmwsa/actions/invoke
DEBUG:oci.base_client.140263433271056: 2019-12-09 21:54:37.288483: time elapsed for request 7C625DE5724D4C3B8E26A771D3F7F87B: 32.679952513892204
DEBUG:oci.base_client.140263433271056: 2019-12-09 21:54:37.288662: time elapsed in response: 0:00:32.676453
DEBUG:oci.base_client.140263433271056: 2019-12-09 21:54:37.288778: Response status: 200
DEBUG:oci.base_client.140263433271056: 2019-12-09 21:54:37.288893: Response returned
DEBUG:oci.base_client.140263433271056:time elapsed for request: 32.68057371187024</code></pre><p></p><h6 id="altering-the-cronjob-schedule">Altering the CronJob Schedule</h6><p>With Kubernetes, one CronJob object is like one line of a crontab (cron table) file. It runs a job periodically on a given schedule, written in <a href="https://en.wikipedia.org/wiki/Cron">Cron</a> format. For the sake of example, the above CronJob object will invoke the Oracle Function every 5 minutes.</p><p>To have your function run on a different schedule - simply modify the <strong><code>schedule</code></strong> as defined on line 6 in <strong><code>oci-fn-cron.yaml</code></strong>, and resubmit the CronJob.</p><h3 id="scheduling-different-cli-operations">Scheduling Different CLI Operations</h3><p>In the example solution we&apos;ve scheduled the OCI CLI to invoke an Oracle Serverless Function at a regular interval.</p><p>Now that you have the solution in place, it&apos;s actually very easy to schedule additional Kubernetes CronJobs that execute different CLI commands.</p><p>All that&apos;s required is to create additional CronJob YAML files, in each updating the job <strong><code>name</code></strong> (line 4 in the example <strong><code>oci-fn-cron.yaml</code></strong>) and the OCI CLI command to execute via <strong><code>args</code></strong> (line 15 in the example <strong><code>oci-fn-cron.yaml</code></strong>) to suit your requirements.</p><p>The complete list of services supported by the OCI CLI is <a href="https://docs.cloud.oracle.com/iaas/Content/API/Concepts/cliconcepts.htm">available here</a>, and the range of possible use cases is almost limitless!</p><p></p><p>Cover Photo by <a href="https://unsplash.com/@pawel_czerwinski?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Pawe&#x142; Czerwi&#x144;ski</a> on <a 
href="https://unsplash.com/t/textures-patterns?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a>.</p>]]></content:encoded></item><item><title><![CDATA[Working with HTTP in Oracle Functions using the Fn Project Python FDK]]></title><description><![CDATA[When working with both the Fn Project and the Oracle Functions service, it's possible to access information about the invocation, function, and execution environment from within a running function - including HTTP properties such as custom HTTP headers and HTTP query parameters.]]></description><link>https://blog.bytequalia.com/working-with-http-in-oracle-functions-using-the-fn-project-python-fdk/</link><guid isPermaLink="false">5e7b11628805fc0001e24b1d</guid><category><![CDATA[Serverless]]></category><category><![CDATA[Cloud Native]]></category><dc:creator><![CDATA[Cameron Senese]]></dc:creator><pubDate>Wed, 25 Mar 2020 08:34:43 GMT</pubDate><media:content url="https://blog.bytequalia.com/content/images/2020/03/steve-johnson-RqLYVtETwR8-unsplash.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://blog.bytequalia.com/content/images/2020/03/steve-johnson-RqLYVtETwR8-unsplash.jpg" alt="Working with HTTP in Oracle Functions using the Fn Project Python FDK"><p>When working with both the Fn Project and the Oracle Functions service, it&apos;s possible to access information about the invocation, function, and execution environment from within a running function - including HTTP properties such as custom HTTP headers and HTTP query parameters.</p><p>In this blog post, I&apos;ll show you how to work with HTTP requests when building your Fn functions using the Python Function Development Kit (FDK).</p><figure class="kg-card kg-image-card"><img src="https://blog.bytequalia.com/content/images/2020/04/steve-johnson-RqLYVtETwR8-unsplash.jpg" class="kg-image" alt="Working with HTTP in Oracle Functions using the Fn Project Python FDK" loading="lazy"></figure><h3 
id="scenario">Scenario</h3><p>I was recently working on an IoT implementation where devices had been configured to remotely invoke an Oracle Function in order to retrieve configuration data.</p><p>When invoked by an IoT device, the Oracle Function was configured to connect to a downstream system to retrieve configuration data specific to the requesting device, and in turn respond with a JSON payload.</p><p>In order to meet its functional requirement, the Oracle Function (in this case, implemented in Python) required access, at runtime, to a custom HTTP header submitted by the requesting device.</p><p>As the example scenario illustrates, accessing information about the invocation, function, and execution environment is a typical requirement when working with serverless functions.</p><p>The Fn Project implements a number of features, including Function Development Kits (FDKs), to simplify and standardise the developer experience when working with such data.</p><h3 id="about-the-fn-project-and-oracle-functions">About the Fn Project and Oracle Functions</h3><p><a href="https://docs.cloud.oracle.com/iaas/Content/Functions/Concepts/functionsoverview.htm">Oracle Functions</a> is a fully managed, highly scalable, on-demand, Functions-as-a-Service (FaaS) platform, built on enterprise-grade Oracle Cloud Infrastructure, and powered by the <a href="https://fnproject.io/">Fn Project</a> open source engine.</p><p>With Oracle Functions, you can deploy your code, call it directly or trigger it in response to events, and get billed only for the resources consumed during the execution.</p><p>Oracle Functions are &quot;container-native&quot;. 
This means that each function is a completely self-contained Docker image that is stored in your OCIR Docker Registry, then pulled, deployed, and executed when you invoke your function.</p><h3 id="fn-project-function-development-kits">Fn Project Function Development Kits</h3><p>The Fn Project provides Function Development Kits (FDKs) with support for a variety of programming languages, including Java, Python, Ruby, Node, &amp; Go.</p><p>FDKs are designed to abstract away from developers the requirement to interact directly with underlying low-level constructs, or to perform complex work such as protocol framing.</p><p>At runtime, FDKs execute in three phases, in the following order:</p><ul><li>Request: with request data deserialised into a request context and request data</li><li>Execute: with request context and request data</li><li>Respond: with response data being rendered into a formatted response</li></ul><p>At the time you create an Fn function, you specify a <em>handler</em>, which is a function in your code that Oracle Functions can invoke when the service executes your code.</p><p>The handler accepts a callable object: <strong><code>fdk.handle({callable_object})</code></strong>, and the callable object implements a signature with the format: <strong><code>(context, data)</code></strong>.</p><p>The following general syntax structure is used when creating a handler function in Python:</p><pre><code class="language-python">def handler(ctx, data):
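    # ctx  - the request context object passed in by the FDK, exposing
    #        invocation metadata such as headers, config, and IDs
    # data - the request body; for an HTTP invocation, the HTTP request body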
    ...
    return response</code></pre><p></p><p>The request <strong><code>data</code></strong> is obtained from the request used to trigger the function. In the case of an HTTP invocation, <strong><code>data</code></strong> is an HTTP request body.</p><p>When Oracle Functions invokes your function, it passes a context object <strong><code>ctx</code></strong> to the handler. This object exposes methods and properties that provide information about the invocation, function, and execution environment.</p><h3 id="fdk-request-context">FDK Request Context</h3><p>The following describes the range of data exposed by the attributes of a request context object when working with the Python FDK:</p><p><strong>Config</strong><br> &#xA0; &#xA0;Class: <strong><code>os._Environ</code></strong><br> &#xA0; &#xA0;Configuration data for the current application and current function.<br><strong>Headers</strong><br> &#xA0; &#xA0;Class: <strong><code>dict</code></strong><br> &#xA0; &#xA0;HTTP headers included with the request submitted to invoke the current function.<br><strong>AppID</strong><br> &#xA0; &#xA0;Class: <strong><code>str</code></strong><br> &#xA0; &#xA0;Unique Oracle Cloud ID (OCID) assigned to the application.<br><strong>FnID</strong><br> &#xA0; &#xA0;Class: <strong><code>str</code></strong><br> &#xA0; &#xA0;Unique Oracle Cloud ID (OCID) assigned to the function.<br><strong>CallID</strong><br> &#xA0; &#xA0;Class: <strong><code>str</code></strong><br> &#xA0; &#xA0;Unique ID assigned to the request.<br><strong>Format</strong><br> &#xA0; &#xA0;Class: <strong><code>str</code></strong><br> &#xA0; &#xA0;The function&#x2019;s communication format - the interaction protocol between Fn and the function.<br><strong>Deadline</strong><br> &#xA0; &#xA0;Class: <strong><code>str</code></strong><br> &#xA0; &#xA0;How soon the function will be aborted, including the timeout date and time.<br><strong>RequestURL</strong><br> &#xA0; &#xA0;Class: <strong><code>str</code></strong><br> &#xA0; &#xA0;Request 
URL that was used to invoke the current function.<br><strong>Method</strong><br> &#xA0; &#xA0;Class: <strong><code>str</code></strong><br> &#xA0; &#xA0;HTTP method used to invoke the current function.</p><h3 id="example-function">Example Function</h3><p>Here&apos;s an example of a simple hello world function which will include within the response data the full set of attributes exposed by the request context object:</p><pre><code class="language-python">import io
import json
from fdk import response
 
def handler(ctx, data: io.BytesIO=None):
    name = &quot;World&quot;
    try:
        body = json.loads(data.getvalue())
        name = body.get(&quot;name&quot;)
    except (Exception, ValueError) as ex:
        print(str(ex))
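    # Individual context attributes can also be read directly - for example,
    # a custom header (hypothetical name, per the IoT scenario) or the query
    # string of the invoking request:
    #   device_id = ctx.Headers().get("x-device-id")
    #   query_string = ctx.RequestURL().split("?", 1)[-1]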
    return response.Response(
        ctx, response_data=json.dumps(
            {&quot;Message&quot;: &quot;Hello {0}&quot;.format(name),
            &quot;ctx.Config&quot; : dict(ctx.Config()),
            &quot;ctx.Headers&quot; : ctx.Headers(),
            &quot;ctx.AppID&quot; : ctx.AppID(),
            &quot;ctx.FnID&quot; : ctx.FnID(),
            &quot;ctx.CallID&quot; : ctx.CallID(),
            &quot;ctx.Format&quot; : ctx.Format(),
            &quot;ctx.Deadline&quot; : ctx.Deadline(),
            &quot;ctx.RequestURL&quot;: ctx.RequestURL(),
            &quot;ctx.Method&quot;: ctx.Method()},
            sort_keys=True, indent=4),
        headers={&quot;Content-Type&quot;: &quot;application/json&quot;}
    )</code></pre><p></p><p>When invoked, the hello world function returns its response data as formatted JSON:</p><pre><code class="language-json">{
    &quot;Message&quot;: &quot;Hello World&quot;,
    &quot;ctx.AppID&quot;: &quot;ocid1.fnapp.oc1.iad.aaaaaaaaafkiyvdtalsn6aako2i6jttllk7tgaj4v4hgpnccwggd00000000&quot;,
    &quot;ctx.CallID&quot;: &quot;01E3BCBYFR1BT163GZ00000000&quot;,
    &quot;ctx.Config&quot;: {
        &quot;FN_APP_ID&quot;: &quot;ocid1.fnapp.oc1.iad.aaaaaaaaafkiyvdtalsn6aako2i6jttllk7tgaj4v4hgpnccwggd00000000&quot;,
        &quot;FN_CPUS&quot;: &quot;100m&quot;,
        &quot;FN_FN_ID&quot;: &quot;ocid1.fnfunc.oc1.iad.aaaaaaaaadjx7atmmfcbm6ipfw67bykoh2lniadadurqiex2p3d500000000&quot;,
        &quot;FN_FORMAT&quot;: &quot;http-stream&quot;,
        &quot;FN_LISTENER&quot;: &quot;unix:/tmp/iofs/lsnr.sock&quot;,
        &quot;FN_MEMORY&quot;: &quot;256&quot;,
        &quot;FN_TYPE&quot;: &quot;sync&quot;,
        &quot;GPG_KEY&quot;: &quot;0D96DF4D4110E5C43FBFB17F2D347EA600000000&quot;,
        &quot;HOME&quot;: &quot;/home/fn&quot;,
        &quot;HOSTNAME&quot;: &quot;71c2e839ca59&quot;,
        &quot;LANG&quot;: &quot;C.UTF-8&quot;,
        &quot;OCI_RESOURCE_PRINCIPAL_PRIVATE_PEM&quot;: &quot;/.oci-credentials/private.pem&quot;,
        &quot;OCI_RESOURCE_PRINCIPAL_REGION&quot;: &quot;us-ashburn-1&quot;,
        &quot;OCI_RESOURCE_PRINCIPAL_RPST&quot;: &quot;/.oci-credentials/rpst&quot;,
        &quot;OCI_RESOURCE_PRINCIPAL_VERSION&quot;: &quot;2.2&quot;,
        &quot;PATH&quot;: &quot;/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin&quot;,
        &quot;PYTHONPATH&quot;: &quot;/python&quot;,
        &quot;PYTHON_GET_PIP_SHA256&quot;: &quot;b86f36cc4345ae87bfd4f10ef6b2dbfa7a872fbff70608a1e43944d283fd0eee&quot;,
        &quot;PYTHON_GET_PIP_URL&quot;: &quot;https://github.com/pypa/get-pip/raw/ffe826207a010164265d9cc807978e3604d18ca0/get-pip.py&quot;,
        &quot;PYTHON_PIP_VERSION&quot;: &quot;19.3.1&quot;,
        &quot;PYTHON_VERSION&quot;: &quot;3.6.9&quot;,
        &quot;card&quot;: &quot;not_set&quot;
    },
    &quot;ctx.Deadline&quot;: &quot;2020-03-14T02:01:58Z&quot;,
    &quot;ctx.FnID&quot;: &quot;ocid1.fnfunc.oc1.iad.aaaaaaaaadjx7atmmfcbm6ipfw67bykoh2lniadadurqiex2p3d500000000&quot;,
    &quot;ctx.Format&quot;: &quot;http-stream&quot;,
    &quot;ctx.Headers&quot;: {
        &quot;accept&quot;: &quot;*/*&quot;,
        &quot;accept-encoding&quot;: &quot;gzip&quot;,
        &quot;content-type&quot;: &quot;application/octet-stream&quot;,
        &quot;date&quot;: &quot;Sat, 14 Mar 2020 02:01:03 GMT&quot;,
        &quot;fn-call-id&quot;: &quot;01E3BCBYFR1BT163GZ00000000&quot;,
        &quot;fn-deadline&quot;: &quot;2020-03-14T02:01:58Z&quot;,
        &quot;fn-http-method&quot;: &quot;GET&quot;,
        &quot;fn-http-request-url&quot;: &quot;/gtc/ml?card=CC50E3CCBFFF&quot;,
        &quot;fn-intent&quot;: &quot;httprequest&quot;,
        &quot;fn-invoke-type&quot;: &quot;sync&quot;,
        &quot;host&quot;: &quot;localhost&quot;,
        &quot;oci-subject-id&quot;: &quot;ocid1.apigateway.oc1.iad.amaaaaaap7nzmjiajosozyrcbvtwhtqdd4zvlojl4qn4teauugge00000000&quot;,
        &quot;oci-subject-tenancy-id&quot;: &quot;ocid1.tenancy.oc1..aaaaaaaac3l6hgylozzuh2bxhf3557quavpa2v6675u2kejplzal00000000&quot;,
        &quot;oci-subject-type&quot;: &quot;resource&quot;,
        &quot;opc-request-id&quot;: &quot;/7FAB0BCD835B93B731AF16E754880DA2/01E3BCBYD01BT163GZ00000000&quot;,
        &quot;orwarded&quot;: &quot;for=111.111.111.111&quot;,
        &quot;ost&quot;: &quot;b65alubkumgiuzhn3400000000.apigateway.us-ashburn-1.oci.customer-oci.com&quot;,
        &quot;transfer-encoding&quot;: &quot;chunked&quot;,
        &quot;user-agent&quot;: &quot;curl/7.47.0&quot;,
        &quot;x-content-sha256&quot;: &quot;47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZ00000000&quot;,
        &quot;x-forwarded-for&quot;: &quot;111.111.111.111&quot;,
        &quot;x-forwarded-host&quot;: &quot;ggd00000000.us-ashburn-1.functions.oci.oraclecloud.com:443&quot;,
        &quot;x-forwarded-port&quot;: &quot;443&quot;,
        &quot;x-forwarded-proto&quot;: &quot;https&quot;,
        &quot;x-real-ip&quot;: &quot;111.111.111.111&quot;,
        &quot;x-device-id&quot;: &quot;CC50E3CCB000&quot;
    },
    &quot;ctx.Method&quot;: &quot;GET&quot;,
    &quot;ctx.RequestURL&quot;: &quot;/gtc/ml?device-id=CC50E3CCB000&quot;
}</code></pre><p></p><p>With the IoT scenario in mind, and in reference to the JSON response data - it&apos;s apparent that we&apos;re able to obtain the requesting IoT device&apos;s unique ID from the custom HTTP header <strong><code>x-device-id</code></strong>, which is available via the request context object attribute <strong><code>Headers</code></strong>.</p><p>Another option for submitting custom data to the function via the request context object is to include HTTP request parameters in the HTTP request submitted to invoke the function.</p><p>HTTP request parameters are exposed by the request context object attribute <strong><code>RequestURL</code></strong>. Per the example response data, the IoT device ID was also submitted via the HTTP request parameter <strong><code>device-id</code></strong>, and is likewise available via <strong><code>RequestURL</code></strong>.</p><p>There&apos;s a wealth of contextual and environmental data exposed by the request context object, and through the context object it&apos;s readily available for developers to work with when constructing functions in Python.</p><p>If you&apos;re not already working with Oracle Functions on OCI, get started today by heading over to <a href="https://www.oracle.com/cloud/free/">https://www.oracle.com/cloud/free/</a> to access a free trial, and unlock access to the <a href="https://docs.cloud.oracle.com/en-us/iaas/Content/FreeTier/resourceref.htm">always free services</a>.</p><p></p><p>Cover Photo by <a href="https://unsplash.com/@steve_j?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Steve Johnson</a> on <a href="https://unsplash.com/t/textures-patterns?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a>.</p>]]></content:encoded></item><item><title><![CDATA[About]]></title><description><![CDATA[byteQualia is the personal weblog 
of Cameron Senese.
It's a central repository for projects, articles, ideas, and learnings which are typically related to contemporary computing concepts.]]></description><link>https://blog.bytequalia.com/about/</link><guid isPermaLink="false">5e787fe007b0000001c1ef05</guid><dc:creator><![CDATA[Cameron Senese]]></dc:creator><pubDate>Mon, 23 Mar 2020 09:32:09 GMT</pubDate><media:content url="https://blog.bytequalia.com/content/images/2020/03/susan-yin-2JIvboGLeho-unsplash-6.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://blog.bytequalia.com/content/images/2020/03/susan-yin-2JIvboGLeho-unsplash-6.jpg" alt="About"><p>Welcome to the byteQualia blog!</p><blockquote><em>byteQualia</em></blockquote><blockquote><em>The byte is a unit of digital information that most commonly consists of eight bits. Historically, the byte was the number of bits used to encode a single character of text in a computer.</em></blockquote><blockquote><em>There are many definitions of qualia, which have changed over time. One of the simpler, broader definitions is: The &apos;what it is like&apos; character of mental states.</em></blockquote><figure class="kg-card kg-image-card"><img src="https://blog.bytequalia.com/content/images/2020/03/susan-yin-2JIvboGLeho-unsplash-7.jpg" class="kg-image" alt="About" loading="lazy"></figure><p>This is a personal weblog. 
Any views or opinions represented in this weblog are personal and belong solely to the blog author, and do not represent those of people, institutions, or organisations that the author may or may not be associated with in a professional or personal capacity, unless explicitly stated.</p><p>Any views or opinions are not intended to malign any religion, ethnic group, club, organisation, company, or individual.</p><p></p><p>Photo by <a href="https://unsplash.com/@syinq?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Susan Yin</a> on <a href="https://unsplash.com/s/photos/author?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a>.</p>]]></content:encoded></item></channel></rss>