Singleton in Golang

Getting started #

In this tutorial, you will learn how to use the singleton pattern in Go. We will first see how to create a singleton, then look at how a program with multiple goroutines can affect our singleton, and finally how we can solve that problem using some standard Go packages.

Creating a singleton #

Let's assume we are developing an application that requires an in-memory cache, and we only need one instance of this cache throughout the application. This cache needs to have a Get and a Set function, so it has to fulfill the following interface.

package cache 

// interface our cache needs to fulfill
type MyCache interface {
	Get(key string) string
	Set(key string, value string)
}

To fulfill this interface, let's create a struct called myCache that implements the required functions.

package cache

type MyCache interface {
	Get(key string) string
	Set(key string, value string)
}

// 👇 our cache struct
type myCache struct {
}

// the actual implementation of `Get` and `Set` don't matter for this tutorial

func (c myCache) Get(key string) string {
	return "default value"
}

func (c myCache) Set(key, value string) {
}

Notice that the name myCache starts with a lowercase letter. This makes the struct unexported, so nobody outside the cache package can create a new instance of myCache.
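Because the identifier is unexported, code outside the package cannot even refer to it. A hypothetical snippet like the one below (using the same placeholder module path as later in this tutorial) would fail to compile:

package main

import "<module_path>/cache"

func main() {
	// this does not compile: myCache is unexported, so it cannot be
	// referenced from outside the cache package
	c := cache.myCache{}
	_ = c
}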

Now it's time to write a GetCache function that always returns the same instance of myCache.

// ...continuing from the previous example

var cache *myCache

func GetCache() MyCache {
	if cache == nil {
		cache = &myCache{}
	}

	return cache
}

Here we check whether the cache has already been initialized (i.e. whether cache is nil); if it hasn't, we initialize it, and then we return it. Once done, we can use it throughout our application.

package main

import (
	"fmt"
	"<module_path>/cache"
)

func main() {
	c := cache.GetCache()

	fmt.Println(c.Get("key"))
}
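As a quick sanity check, you could compare the results of two GetCache calls; since both interface values wrap the same underlying cache instance, the comparison below prints true:

package main

import (
	"fmt"

	"<module_path>/cache"
)

func main() {
	a := cache.GetCache()
	b := cache.GetCache()

	// both interface values wrap the same cache instance, so this prints true
	fmt.Println(a == b)
}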

Congratulations 👏 you have now created a singleton. But hold on, your program is not safe in a concurrent environment 😱.

Singleton and Concurrency #

The current solution will work fine if it is only accessed/initialized by a single goroutine, but in the case where you have multiple goroutines accessing the singleton at the same time, there can be race conditions and you might end up initializing the cache multiple times.

There could be a case where multiple goroutines arrive at the if cache == nil check at the same time.

// ... same as previous code

func GetCache() MyCache {
	if cache == nil { // 👈 Multiple goroutines can arrive here at the same time
		cache = &myCache{}
	}

	return cache
}
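To make the problem visible, here is a self-contained sketch (it uses a throwaway counter instead of the real cache package) that mimics the unsynchronized GetCache above. Running it with the race detector enabled (go run -race) reports the conflicting accesses, and the initialization count can occasionally end up greater than 1:

package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// mimics the unsynchronized GetCache above, with a counter added purely to
// observe how many times the "singleton" gets initialized
var (
	instance  *struct{}
	initCount int64
)

func getInstance() *struct{} {
	if instance == nil { // several goroutines can observe nil here at once
		atomic.AddInt64(&initCount, 1) // count every initialization
		instance = &struct{}{}
	}
	return instance
}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 1000; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			getInstance()
		}()
	}
	wg.Wait()

	// usually 1, but nothing guarantees it
	fmt.Println("initializations:", atomic.LoadInt64(&initCount))
}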

What we can do is use sync.Mutex to obtain a lock before we initialize the cache (read more about sync.Mutex here).

// ... same as previous code

var cache *myCache
var mutex sync.Mutex // 👈 defining our mutex

func GetCache() MyCache {
	mutex.Lock() // 👈 goroutines will wait here
	defer mutex.Unlock()

	if cache == nil {
		cache = &myCache{}
	}

	return cache
}

Why does this work? The lock on a mutex can be held by only one goroutine at a time. If multiple goroutines call mutex.Lock() at the same time, only one of them proceeds; the others block until mutex.Unlock() is called (which, thanks to defer, happens when GetCache returns).
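Here is a minimal, standalone illustration of that behaviour (it has nothing to do with the cache itself): 100 goroutines increment a shared counter, and because each increment happens while holding the mutex, the final value is always 100:

package main

import (
	"fmt"
	"sync"
)

// only one goroutine can hold the mutex at a time, so the increments below
// never interleave mid-update
func main() {
	var (
		mu      sync.Mutex
		counter int
		wg      sync.WaitGroup
	)

	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			mu.Lock()         // blocks until no other goroutine holds the lock
			defer mu.Unlock() // released when this function returns
			counter++
		}()
	}

	wg.Wait()
	fmt.Println(counter) // always 100, because the mutex serializes the writes
}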

Optimizing our solution #

While we have solved the problem of initializing the singleton only once, now every call to GetCache has to obtain the mutex lock, which is an unnecessary (and potentially expensive) operation 🤦‍♂️.

What if we move the mutex.Lock() inside the if condition 🤔?

// ... same as previous code

var cache *myCache
var mutex sync.Mutex

func GetCache() MyCache {
	if cache == nil {
		mutex.Lock() // 👈 moving it inside the if block
		defer mutex.Unlock()

		cache = &myCache{}
	}

	return cache
}

While this ensures that once the cache is initialized we no longer need to obtain the lock, we are back to square one: multiple goroutines can pass the nil check before any of them acquires the lock, so the cache can still be initialized multiple times.

To solve this problem we can add a second check, after obtaining the lock, to see whether the cache has already been initialized.

// ... same as previous code

var cache *myCache
var mutex sync.Mutex

func GetCache() MyCache {
	if cache == nil {
		mutex.Lock()
		defer mutex.Unlock()

		if cache == nil { // 👈 additional check
			cache = &myCache{}
		}
	}

	return cache
}

So now even if multiple goroutines obtain the lock one after another, the additional check ensures that only the first of them initializes the cache.

This largely solves our problem 🥳! One caveat: the outer cache == nil check still reads cache without holding the lock, so under the Go memory model this double-checked pattern is technically a data race (the race detector will flag it), which is one more reason to prefer the approach in the next section.
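If you want to check a GetCache implementation yourself, a hypothetical test like the sketch below, run with go test -race, lets the race detector examine GetCache under concurrent use:

package cache

import (
	"sync"
	"testing"
)

// hypothetical cache_test.go: go test -race reports any data races that occur
// while many goroutines call GetCache concurrently
func TestGetCacheConcurrent(t *testing.T) {
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			GetCache().Set("key", "value")
		}()
	}
	wg.Wait()
}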

Using sync.Once #

Fortunately, we don't have to do all of that hoopla to make our program concurrency safe; Go provides sync.Once to solve this exact problem.

// ... same as previous code

var cache *myCache
var once sync.Once // 👈 defining a new `sync.Once`

func GetCache() MyCache {
	// 👇 the function only gets called once
	once.Do(func() {
		cache = &myCache{}
	})

	return cache
}

The function passed to once.Do gets executed exactly once, even if GetCache is called thousands of times and from many goroutines, and sync.Once handles the synchronization for us. I encourage you to read more about sync.Once here.
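To see sync.Once in isolation, here is a small standalone sketch: 1000 goroutines call once.Do, yet the function passed to it prints its message exactly once:

package main

import (
	"fmt"
	"sync"
)

// the function passed to once.Do runs exactly once, no matter how many
// goroutines call Do
func main() {
	var (
		once sync.Once
		wg   sync.WaitGroup
	)

	for i := 0; i < 1000; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			once.Do(func() {
				fmt.Println("initializing") // printed exactly once
			})
		}()
	}

	wg.Wait()
}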

The complete program would look something like this:

package cache

import "sync"

type MyCache interface {
	Get(key string) string
	Set(key string, value string)
}

type myCache struct {
}

func (c myCache) Get(key string) string {
	return "default value"
}

func (c myCache) Set(key, value string) {
}

var cache *myCache
var once sync.Once

func GetCache() MyCache {
	once.Do(func() {
		cache = &myCache{}
	})

	return cache
}

Thank you for reading this article 🙏🏻, I hope you learned something new.
