Building a CLI utility for managing cloud services lifecycle using the OCI Go SDK
okectl is an open source CLI utility designed for use with Oracle Container Engine for Kubernetes (OKE). okectl provides a command-line interface for management of OKE and associated resources, including Kubernetes cluster lifecycle.
Introduction
okectl is designed as a stand-alone tool to automate operations such as the Kubernetes cluster creation process, and is typically best used as part of an automation pipeline.
Aside from being a useful automation tool, it's also a useful example of a practical application development scenario that leverages an OCI Software Development Kit (SDK).
In this blog post I'll provide an overview of the OCI Go SDK, together with a look at okectl and how it's built.
About okectl
okectl is built using the Go SDK for Oracle Cloud Infrastructure (OCI).
Supported Operations
- --createOkeCluster: Creates Kubernetes cluster control plane, node pool, worker nodes, & configuration data (kubeconfig & JSON cluster description).
- --deleteOkeCluster: Deletes specified cluster.
- --getOkeNodePool: Retrieves cluster, node pool, and node details for a specified node pool.
- --createOkeKubeconfig: Creates kubeconfig authentication artefact for kubectl.
Interesting Features
As a little context, I created okectl a while back - right at the time when the OKE service had just been released.
I needed a means to provision a Kubernetes cluster to OCI end-to-end, entirely as code. Terraform provided the ability to create all of OKE's OCI-related dependencies, including VCN, subnets, load balancers, etc., but at the time lacked support for the OKE service itself.
I created okectl to fill this gap. okectl was created to work in tandem with Terraform; that is, to be executed by Terraform as a local-exec operation. With okectl, a single Terraform configuration could be composed to first create the OCI resources necessary to support an OKE cluster, then in turn run okectl to build the cluster.
With this use-case in mind, I also built some interesting features into okectl to provide Terraform with the ability to perform automated, remote software installation to OKE clusters - again, as part of a single Terraform configuration (in this case, I leverage Terraform remote-exec and Helm).
Note: The OCI Terraform provider now supports OKE, including operations such as cluster lifecycle management.
--waitNodesActive
By default, OKE will report the status of a worker node pool as ACTIVE when the node pool entity itself is created - however, this precedes the instantiation of the worker nodes themselves. It will typically be some minutes after the node pool is ACTIVE that the worker nodes in the pool will have been instantiated, software installation completed, and the worker nodes themselves reach the status of ACTIVE - and are thus ready to run containers in support of the cluster.
--getOkeNodePool retrieves cluster, node pool, and node details for a specified node pool, and also provides the ability to wait for nodes in a given node pool to become fully active, via the flag --waitNodesActive.
This is handy when used as part of an automation pipeline, where any next action is dependent on having cluster worker nodes fully provisioned and active - e.g. when Terraforming, to pause all operations until worker nodes are active and ready for further configuration or software installation. (See the flag --tfExternalDs below for further detail on providing a worker node IP address to Terraform.)
--waitNodesActive has three modes of operation:
- --waitNodesActive=false: okectl will not wait, and will return when the nominated node pool is ACTIVE, regardless of the state of nodes within the pool.
- --waitNodesActive=any: okectl will wait, and return when the nominated node pool is ACTIVE and any of the nodes in the nominated node pool return as ACTIVE.
- --waitNodesActive=all: okectl will wait, and return when the nominated node pool is ACTIVE and all of the nodes in the nominated node pool return as ACTIVE.
okectl implements --waitNodesActive by enumerating the lifecycle state of nodes in a given node pool via the NodeLifecycleStateEnum enumerator within the OCI Go SDK.
Node lifecycle state is exposed via the SDK (ContainerEngineClient) GetNodePool function, which in the following example is providing data to okectl's getNodeLifeCycleState function:
// get worker node lifecycle status..
func getNodeLifeCycleState(
    ctx context.Context,
    client containerengine.ContainerEngineClient,
    nodePoolId string) containerengine.GetNodePoolResponse {

    // request the node pool, including its worker nodes..
    req := containerengine.GetNodePoolRequest{}
    req.NodePoolId = common.String(nodePoolId)
    resp, err := client.GetNodePool(ctx, req)
    helpers.FatalIfError(err)

    // marshal & parse json to extract the per-node lifecycle states..
    // nodeLifeCycleState is assigned for use elsewhere (declared outside this snippet)
    nodePoolResp := resp.NodePool
    nodesJson, _ := json.Marshal(nodePoolResp)
    jsonParsed, _ := gabs.ParseJSON(nodesJson)
    nodeLifeCycleState = jsonParsed.Path("nodes.lifecycleState").String()
    return resp
}
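To implement the wait behaviour, okectl compares these lifecycle states against the ACTIVE value and polls until the requested condition is met. The following is a simplified sketch of how such a wait loop could be layered on top of getNodeLifeCycleState - okectl's actual polling logic differs (for example, it rate-limits its API calls rather than using a fixed sleep):
// wait until all worker nodes in the node pool report ACTIVE
// (illustrative sketch only - not okectl's exact implementation)..
func waitAllNodesActive(
    ctx context.Context,
    client containerengine.ContainerEngineClient,
    nodePoolId string) {

    for {
        resp := getNodeLifeCycleState(ctx, client, nodePoolId)

        // count nodes that have reached the ACTIVE lifecycle state..
        active := 0
        for _, node := range resp.NodePool.Nodes {
            if node.LifecycleState == containerengine.NodeLifecycleStateActive {
                active++
            }
        }
        if len(resp.NodePool.Nodes) > 0 && active == len(resp.NodePool.Nodes) {
            return
        }
        time.Sleep(30 * time.Second)
    }
}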
--tfExternalDs
Where the --getOkeNodePool flag --tfExternalDs=true is used, okectl will run as a Terraform external data source.
The Terraform external data source allows an external program implementing a specific protocol to act as a data source, exposing arbitrary data to Terraform for use elsewhere in the Terraform configuration.
In this circumstance, okectl provides a JSON response containing the public IP address of a worker node in a format compatible with the Terraform external data source specification:
./okectl getOkeNodePool --tfExternalDs=true
{"workerNodeIp":"132.145.156.184"}
In combination with the --waitNodesActive flag, this provides the ability to have Terraform first wait for worker nodes in a new node pool to become active, then obtain the public IP address of a worker node from okectl. With the public IP address of the worker node, Terraform can proceed to call a remote-exec provisioner to perform operations such as cluster configuration and application workload deployment.
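Producing the external data source response is straightforward - Terraform expects a single JSON object of string keys and values on stdout. A minimal sketch in Go (the workerNodeIp value here is a placeholder for the address retrieved via GetNodePool):
// emit a Terraform external data source compatible JSON object to stdout..
// (sketch only - workerNodeIp is a placeholder for the address retrieved via GetNodePool)
package main

import (
    "encoding/json"
    "fmt"
)

func main() {
    workerNodeIp := "132.145.156.184"
    out, err := json.Marshal(map[string]string{"workerNodeIp": workerNodeIp})
    if err != nil {
        panic(err)
    }
    fmt.Println(string(out))
}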
About the OCI Software Development Kits
Oracle Cloud Infrastructure provides a number of SDKs to facilitate development of custom solutions.
The OCI SDKs are designed to streamline the process of building and deploying applications that integrate with Oracle Cloud Infrastructure services.
Each SDK provides the tools you need to develop an application, including code samples and documentation to create, test, and troubleshoot solutions.
If you want to contribute to the development of the SDKs, they are all open source and available on GitHub.
At present, Oracle offers SDKs for Java, Python, Ruby, & Go.
OCI REST APIs & SDKs
Generally speaking, the Oracle Cloud Infrastructure APIs are typical REST APIs that adopt the following characteristics:
- The Oracle Cloud Infrastructure APIs use standard HTTP requests and responses.
- All Oracle Cloud Infrastructure API requests must support HTTPS and SSL protocol TLS 1.2.
- All Oracle Cloud Infrastructure API requests must be signed for authentication purposes.
Further detail regarding the OCI REST APIs can be found here, including the API for the Container Engine for Kubernetes service - which can be used to build, deploy, and manage OKE clusters.
Whilst it's possible to program directly against the OCI REST APIs, the OCI SDKs provide a lot of pre-built functionality, and abstract away much of the complexity required when interacting directly with the APIs - for example, creating authorisation signatures and parsing responses.
In addition to the SDKs, Oracle also provide the OCI CLI and the OCI Terraform provider as additional options for a more streamlined experience when developing with the OCI REST API.
OCI Go SDK
The OCI Go SDK contains the following components:
- Service packages: All packages except common and any other package found inside cmd. These packages represent the Oracle Cloud Infrastructure services supported by the Go SDK. Each package represents a service. These packages include methods to interact with the service, structs that model input and output parameters, and a client struct that acts as receiver for the above methods.
- Common package: Found in the common directory. The common package provides supporting functions and structs used by service packages. Includes HTTP request/response (de)serialization, request signing, JSON parsing, pointer to reference and other helper functions. Most of the functions in this package are meant to be used by the service packages.
- cmd: Internal tools used by the oci-go-sdk.
The Go SDK also provides a broad range of examples for programming with many of the available OCI services, including the core services (compute, network, etc.), identity and access management, database, email, DNS, and more.
Full documentation can be found on the GoDocs site here.
To start working with the Go SDK, you need to import the service packages that serve your requirements, create a client, and then proceed to use the client to make calls.
okectl utilises the common, containerengine, and helpers packages from the OCI Go SDK:
// import libraries..
import (
    "context"
    "encoding/json"
    "fmt"
    "io"
    "io/ioutil"
    "os"
    "path/filepath"
    "regexp"
    "strings"
    "time"

    "github.com/Jeffail/gabs"
    "gopkg.in/alecthomas/kingpin.v2"

    "github.com/oracle/oci-go-sdk/common"
    "github.com/oracle/oci-go-sdk/containerengine"
    "github.com/oracle/oci-go-sdk/example/helpers"
)
The containerengine package is provided to simplify much of the heavy lifting associated with the orchestration of the OKE service. okectl leverages the containerengine package to create/destroy clusters, cluster node pools, and kubeconfig artefacts.
Before using the SDK to interact with a service, we first call the common.DefaultConfigProvider() function to provide the necessary configuration and authentication data. See the Authentication section for information on creating a configuration file.
config := common.DefaultConfigProvider()
client, err := containerengine.NewContainerEngineClientWithConfigurationProvider(config)
if err != nil {
    panic(err)
}
After successfully creating a client, requests can now be made to the service. Generally, all functions associated with an operation accept context.Context and a struct that wraps all input parameters. The functions then return a response struct that contains the desired data, and an error struct that describes the error if an error occurs:
// create cluster..
func createCluster(
    ctx context.Context,
    client containerengine.ContainerEngineClient,
    clusterName, vcnId, compartmentId, kubeVersion, subnet1Id, subnet2Id string) containerengine.CreateClusterResponse {

    // populate the create cluster request..
    req := containerengine.CreateClusterRequest{}
    req.Name = common.String(clusterName)
    req.CompartmentId = common.String(compartmentId)
    req.VcnId = common.String(vcnId)
    req.KubernetesVersion = common.String(kubeVersion)
    req.Options = &containerengine.ClusterCreateOptions{
        ServiceLbSubnetIds: []string{subnet1Id, subnet2Id},
        AddOns: &containerengine.AddOnOptions{
            IsKubernetesDashboardEnabled: common.Bool(true),
            IsTillerEnabled:              common.Bool(true),
        },
    }

    fmt.Println("OKECTL :: Create Cluster :: Submitted ...")
    resp, err := client.CreateCluster(ctx, req)
    helpers.FatalIfError(err)
    return resp
}
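okectl's --createOkeKubeconfig operation follows the same request/response pattern, using the SDK's CreateKubeconfig call. The following is a minimal sketch - the variable names and file handling here are illustrative rather than okectl's exact implementation:
// create kubeconfig authentication artefact for kubectl..
// (sketch only - clusterId and configDir are illustrative variables)
func createKubeconfig(
    ctx context.Context,
    client containerengine.ContainerEngineClient,
    clusterId, configDir string) {

    req := containerengine.CreateKubeconfigRequest{}
    req.ClusterId = common.String(clusterId)
    resp, err := client.CreateKubeconfig(ctx, req)
    helpers.FatalIfError(err)
    defer resp.Content.Close()

    // write the returned kubeconfig content to the nominated output directory..
    kubeconfig, err := ioutil.ReadAll(resp.Content)
    helpers.FatalIfError(err)
    err = ioutil.WriteFile(filepath.Join(configDir, "kubeconfig"), kubeconfig, 0600)
    helpers.FatalIfError(err)
}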
OCI applies throttling to many API requests to prevent accidental or abusive use of resources. If you make too many requests too quickly, you might see some succeed and others fail. Oracle recommends that you implement an exponential back-off, starting from a few seconds to a maximum of 60 seconds.
The helpers package implements a number of features, including an exponential retry back-off mechanism that's used to rate-limit API polling. okectl uses this to gracefully wait for operations to complete, such as cluster or node pool creation:
// wait for create cluster completion..
workReqRespCls := waitUntilWorkRequestComplete(c, createClusterResp.OpcWorkRequestId)
fmt.Println("OKECTL :: Create Cluster :: Complete ...")
clusterId := getResourceID(workReqRespCls.Resources, containerengine.WorkRequestResourceActionTypeCreated, "CLUSTER")
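The waitUntilWorkRequestComplete helper isn't reproduced in full in this post. A minimal sketch of the pattern, closely following the containerengine example that ships with the SDK (and assuming the GetWorkRequest call and the helpers retry-policy functions), looks something like this:
// wait for an OKE work request to finish by retrying GetWorkRequest until
// TimeFinished is populated (sketch only - okectl's actual helper may differ)..
func waitUntilWorkRequestComplete(
    client containerengine.ContainerEngineClient,
    workRequestId *string) containerengine.GetWorkRequestResponse {

    // retry while the work request has not yet finished..
    shouldRetry := func(r common.OCIOperationResponse) bool {
        return r.Response.(containerengine.GetWorkRequestResponse).TimeFinished == nil
    }

    req := containerengine.GetWorkRequestRequest{
        WorkRequestId:   workRequestId,
        RequestMetadata: helpers.GetRequestMetadataWithCustomizedRetryPolicy(shouldRetry),
    }

    resp, err := client.GetWorkRequest(context.Background(), req)
    helpers.FatalIfError(err)
    return resp
}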
Installation
Installing the Go SDK is simple: use the go get command to download the package (and any dependencies) and automatically install it:
go get -u github.com/oracle/oci-go-sdk
Authentication
Oracle Cloud Infrastructure SDKs require basic configuration information, like user credentials and tenancy OCID. You can provide this information by:
- Using a configuration file
- Declaring a configuration at runtime
See the following overview for information on how to create a configuration file.
To declare a configuration at runtime, implement the ConfigurationProvider interface shown below:
// ConfigurationProvider wraps information about the account owner
type ConfigurationProvider interface {
    KeyProvider
    TenancyOCID() (string, error)
    UserOCID() (string, error)
    KeyFingerprint() (string, error)
    Region() (string, error)
}
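If you'd rather not implement the interface yourself, the common package also provides NewRawConfigurationProvider, which builds a ConfigurationProvider from raw values. A minimal sketch, with placeholder credentials:
// build a ConfigurationProvider at runtime from raw values (placeholder credentials shown)..
provider := common.NewRawConfigurationProvider(
    "ocid1.tenancy.oc1..aaaa...",                      // tenancy OCID
    "ocid1.user.oc1..aaaa...",                         // user OCID
    "us-ashburn-1",                                    // region
    "aa:bb:cc:dd:ee:ff:00:11:22:33:44:55:66:77:88:99", // API key fingerprint
    privateKeyPem,                                     // PEM-encoded API signing key (string variable)
    nil,                                               // private key passphrase (none)
)

client, err := containerengine.NewContainerEngineClientWithConfigurationProvider(provider)
if err != nil {
    panic(err)
}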
Debug
The SDK has a built-in logging mechanism used internally. The internal logging logic is used to record the raw HTTP requests, responses, and potential errors when (un)marshalling requests and responses.
Built-in logging in the SDK is controlled via the environment variable OCI_GO_SDK_DEBUG and its contents.
The following are possible values for the OCI_GO_SDK_DEBUG variable:
- info or i enables all info logging messages
- debug or d enables all debug and info logging messages
- verbose or v or 1 enables all verbose, debug and info logging messages
- null turns all logging messages off
For example:
OCI_GO_SDK_DEBUG=1
Building okectl from source
Dependencies
- Install the Go programming language
- Install the Go SDK for Oracle Cloud Infrastructure
After installing Go and the OCI Go SDK, clone the okectl repository:
git clone https://gitlab.com/byteQualia/okectl.git
Commands from this point forward will assume that you are in the ../okectl directory.
Build
Build an okectl Linux compatible binary as follows:
GOOS=linux GOARCH=amd64 go build -v okectl.go
Further information and pre-built binaries can be found at the okectl repository.
Using okectl
okectl requires configuration data via command-line arguments & associated flags. Command-line flags provide data relating to both the OCI tenancy and the OKE cluster configuration parameters.
okectl implements Kingpin to manage command-line and flag parsing. I chose Kingpin for this project as it's a type-safe command-line parser that provides straightforward support for flags, nested commands, and positional arguments.
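As a rough sketch of how this looks in practice - the flag names mirror okectl's CLI, but the variable names and wiring here are illustrative only:
// declare commands & flags via kingpin (illustrative sketch)..
package main

import (
    "os"

    "gopkg.in/alecthomas/kingpin.v2"
)

var (
    app = kingpin.New("okectl", "A command-line application for configuring Oracle OKE.")

    createCluster   = app.Command("createOkeCluster", "Create new OKE Kubernetes cluster.")
    vcnId           = createCluster.Flag("vcnId", "OCI VCN-Id where cluster will be created.").Required().String()
    compartmentId   = createCluster.Flag("compartmentId", "OCI Compartment-Id where cluster will be created.").Required().String()
    clusterName     = createCluster.Flag("clusterName", "Kubernetes cluster name.").Default("dev-oke-001").String()
    waitNodesActive = createCluster.Flag("waitNodesActive", "false | any | all").Default("false").String()
)

func main() {
    // dispatch on the parsed command..
    switch kingpin.MustParse(app.Parse(os.Args[1:])) {
    case createCluster.FullCommand():
        // call the cluster/node pool creation functions shown earlier..
    }
}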
The following are a subset of the usage examples that are available at the okectl repository. For further examples, head over there.
Example - Usage
./okectl
usage: OKECTL [<flags>] <command> [<args> ...]
A command-line application for configuring Oracle OKE (Container Engine for Kubernetes.)
Flags:
--help Show context-sensitive help (also try --help-long and --help-man).
--configDir=".okectl" Path where output files are created - e.g. kubeconfig file.
--version Show application version.
Commands:
help [<command>...]
Show help.
createOkeCluster --vcnId=VCNID --compartmentId=COMPARTMENTID --subnet1Id=SUBNET1ID --subnet2Id=SUBNET2ID --subnet3Id=SUBNET3ID [<flags>]
Create new OKE Kubernetes cluster.
deleteOkeCluster --clusterId=CLUSTERID
Delete OKE Kubernetes cluster.
getOkeNodePool [<flags>]
Get cluster, node poool, and node details for a specified node pool.
createOkeKubeconfig --clusterId=CLUSTERID
Create kubeconfig authentication artefact for kubectl.
Example - Create Kubernetes Cluster
Interactive Help
./okectl createOkeCluster --help
usage: OKECTL createOkeCluster --vcnId=VCNID --compartmentId=COMPARTMENTID --subnet1Id=SUBNET1ID --subnet2Id=SUBNET2ID --subnet3Id=SUBNET3ID [<flags>]
Create new OKE Kubernetes cluster.
Flags:
--help Show context-sensitive help (also try --help-long and --help-man).
--configDir=".okectl" Path where output files are created - e.g. kubeconfig file. Specify as absolute path.
--version Show application version.
--vcnId=VCNID OCI VCN-Id where cluster will be created.
--compartmentId=COMPARTMENTID OCI Compartment-Id where cluster will be created.
--subnet1Id=SUBNET1ID Cluster Control Plane LB Subnet 1.
--subnet2Id=SUBNET2ID Cluster Control Plane LB Subnet 2.
--subnet3Id=SUBNET3ID Worker Node Subnet 1.
--subnet4Id=SUBNET4ID Worker Node Subnet 2.
--subnet5Id=SUBNET5ID Worker Node Subnet 3.
--clusterName="dev-oke-001" Kubernetes cluster name.
--kubeVersion="v1.10.3" Kubernetes cluster version.
--nodeImageName="Oracle-Linux-7.4" OS image used for Worker Node(s).
--nodeShape="VM.Standard1.1" CPU/RAM allocated to Worker Node(s).
--nodeSshKey=NODESSHKEY SSH key to provision to Worker Node(s) for remote access.
--quantityWkrSubnets=1 Number of subnets used to host Worker Node(s).
--quantityPerSubnet=1 Number of Worker Nodes per subnet.
--waitNodesActive="false" If waitNodesActive=all, wait & return when all nodes in the pool are active.
If waitNodesActive=any, wait & return when any of the nodes in the pool are active.
If waitNodesActive=false, no wait & return when the node pool is active.
Create Cluster
./okectl createOkeCluster \
--clusterName=OKE-Cluster-001 \
--kubeVersion=v1.10.3 \
--vcnId=ocid1.vcn.oc1.iad.aaaaaaaamg7tqzjpxbbibev7lhp3bhgtcmgkbbrxr7td4if5qa64bbekdxqa \
--compartmentId=ocid1.compartment.oc1..aaaaaaaa2id6dilongtlxxmufoeunasaxuv76xxcb4ewxcxxxw5eba \
--quantityWkrSubnets=1 \
--quantityPerSubnet=1 \
--subnet1Id=ocid1.subnet.oc1.iad.aaaaaaaagq5apzuwr2qnianczzie4ffo6t46rcjehnsyoymiuunxaauq7y7a \
--subnet2Id=ocid1.subnet.oc1.iad.aaaaaaaadxr6zl4jpmcaxd4izzlvbyq2pqss3pmotx6dnusmh3ijorrpbhva \
--subnet3Id=ocid1.subnet.oc1.iad.aaaaaaaabf6k3ufcjdsdb5xfzzc3ayplhpip2jxtnaqvfcpakxt3bhmhecxa \
--nodeImageName=Oracle-Linux-7.4 \
--nodeShape=VM.Standard1.1 \
--nodeSshKey="ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDsHX7RR0z+JSAf+5nfTO9kS4Y6HV2pPXoXTqUJH..." \
--waitNodesActive="all"
For the above createOkeCluster request, okectl will provision:
- Kubernetes Cluster (Control Plane): Version will be as nominated via the --kubeVersion flag.
- Node Pool: The node pool will be created across the number of worker subnets provided via the --quantityWkrSubnets flag.
- Nodes: Worker nodes will be provisioned to each of the nominated worker subnets. The number of worker nodes per subnet is determined by the --quantityPerSubnet flag.
- Configuration Data: A kubeconfig authentication artefact (kubeconfig) & a JSON description of the cluster configuration (nodepool.json) are provisioned to the local filesystem.
Per the flag --waitNodesActive="all", okectl will return when the cluster, node pool, and each of the nodes in the node pool are active.
Once completed, okectl will output the cluster, node pool, and node configuration data (to stdout):
OKECTL :: Create Cluster :: Complete ...
--------------------------------------------------------------------------------------
{
  "id": "ocid1.nodepool.oc1.iad.aaaaaaaaae3tonjqgftdiyrxha2gczrtgu3winbtgbsdszjqmnrdeodegu2t",
  "compartmentId": "ocid1.compartment.oc1..aaaaaaaa2id6dilongtl6fmufoeunasaxuv76b6cb4ewxcw4juafe55w5eba",
  "clusterId": "ocid1.cluster.oc1.iad.aaaaaaaaae2tgnlbmzrtknjygrrwmobsmvrwgnrsmnqtmzjygc2domtbgmyt",
  "name": "oke-dev-001",
  "kubernetesVersion": "v1.10.3",
  "nodeImageId": "ocid1.image.oc1.iad.aaaaaaaajlw3xfie2t5t52uegyhiq2npx7bqyu4uvi2zyu3w3mqayc2bxmaa",
  "nodeImageName": "Oracle-Linux-7.4",
  "nodeShape": "VM.Standard1.1",
  "initialNodeLabels": [],
  "sshPublicKey": "",
  "quantityPerSubnet": 1,
  "subnetIds": [
    "ocid1.subnet.oc1.iad.aaaaaaaajvfrxxawuwhvxnjliox7gzibonafqcyjkdozwie7q5po7qbawl4a"
  ],
  "nodes": [
    {
      "id": "ocid1.instance.oc1.iad.abuwcljtayee6h7ttavqngewglsbe3b6my3n2eoqawhttgtswsu66lrjgi4q",
      "name": "oke-c2domtbgmyt-nrdeodegu2t-soxdncj6x5a-0",
      "availabilityDomain": "Ppri:US-ASHBURN-AD-3",
      "subnetId": "ocid1.subnet.oc1.iad.aaaaaaaattodyph6wco6cmusyza4kyz3naftwf6yjzvog5h2g6oxdncj6x5a",
      "nodePoolId": "ocid1.nodepool.oc1.iad.aaaaaaaaae3tonjqgftdiyrxha2gczrtgu3winbtgbsdszjqmnrdeodegu2t",
      "publicIp": "100.211.162.17",
      "nodeError": null,
      "lifecycleState": "UPDATING",
      "lifecycleDetails": "waiting for running compute instance"
    }
  ]
}
By default, okectl will create a sub-directory named ".okectl" within the same directory as the okectl binary. okectl will create two files within the ".okectl" directory:
- kubeconfig: This file contains authentication and cluster connection information. It should be used with the kubectl command-line utility to access and configure the cluster.
- nodepool.json: This file contains a detailed output of the cluster and node pool configuration in JSON format.
The output directory is configurable via the --configDir flag; the path provided to --configDir should be an absolute path.
All clusters created using okectl will be provisioned with the Kubernetes dashboard & Helm/Tiller installed as additional options.
Conclusion
Head over to the okectl repository for further information on accessing a cluster, performing cluster operations using kubectl via the CLI, or accessing the Kubernetes dashboard.
Cover Photo by Paweł Czerwiński on Unsplash.