ML on Kubeflow - Part 3 (End): Model Serving

In the previous post, we looked at how to train an ML model on a Kubeflow cluster. Now that we have a trained model, it’s time to serve requests.

First, go to the serving directory.

cd $WORKING_DIR/serving/GCS

The Kubeflow manifests in this directory contain a TensorFlow Serving implementation. We simply need to point the component at the GCS bucket where the trained model lives; once deployed, the server will be ready to handle prediction requests.
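Under the hood, these manifests end up launching TensorFlow Serving’s model server with the values we are about to configure. Here’s a minimal sketch of the invocation, assuming the standard tensorflow_model_server flags (the exact arguments and ports come from the base config, not from us):

# Illustrative only: how the name and model path feed TensorFlow Serving
tensorflow_model_server \
  --model_name=mnist-service \
  --model_base_path=gs://${BUCKET_NAME}/my-model/export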

As before, start by setting a few parameters to customize the deployment for our use case.

First, set a name for the serving component; in this example, we use mnist-service.

kustomize edit add configmap mnist-map-serving --from-literal=name=mnist-service

Second, set the model’s location in the storage bucket.

kustomize edit add configmap mnist-map-serving --from-literal=modelBasePath=gs://${BUCKET_NAME}/my-model/export
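Note that ${BUCKET_NAME} is expanded by your shell when you run the command, so it must still be set from the training post. If you are working in a fresh shell, re-export it before running the edit (substitute your own bucket name); you can then confirm that both literals landed in kustomization.yaml:

# Placeholder value: use the bucket from the training post
export BUCKET_NAME=<your-bucket>

# Optional check: both literals should appear under configMapGenerator
grep -A 5 configMapGenerator kustomization.yaml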

Lastly, update the base config files to swap the default-editor service account for kf-user, which is used for this style of authentication.

sed -i 's/default-editor/kf-user/g' ../**/*
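As a quick sanity check (assuming, as the glob suggests, the base manifests live one directory up), verify that the replacement left no occurrences of the old service account behind:

# No output means every default-editor reference was replaced
grep -r default-editor ../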

We can now deploy the server to the existing Kubeflow cluster.

kustomize build . | kubectl apply -f -

Here’s the sample output:

configmap/mnist-deploy-config created
configmap/mnist-map-serving-2f49kbb466 created
service/mnist-service created
deployment.extensions/mnist-service created
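To confirm the server actually came up, inspect the resources created above and tail the TensorFlow Serving logs; a log line reporting that the servable was loaded is a good signal (resource names taken from the output above):

# Check that the deployment and service exist
kubectl get deploy,svc mnist-service

# Follow the serving container's logs while the model loads
kubectl logs deploy/mnist-service --follow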