Memory Management

Detecting Memory Issues Using Unity’s Profiler

Unity’s profiler is primarily geared toward analyzing the performance and the resource demands of the various types of assets in your game. Yet the profiler is equally useful for digging into the memory-related behavior of your C# code – even of external .NET/Mono assemblies that don’t reference UnityEngine.dll.

Unity memory profiler window

This allows you to see whether you have any memory leaks stemming from your C# code. Even if you don’t use any scripts, the ‘Used’ size of the heap grows and contracts continuously. As soon as you do use scripts, you need a way to see where allocations occur, and the CPU profiler provides that information.

Unity CPU profiler window

 

C# Memory Management

Automatic memory management is built deeply into the C# language and is an integral part of its philosophy. Memory management is a hard problem, and C#’s answer is to entrust it largely to the common language runtime (CLR).

Your ability to manage memory, or more precisely how memory is allocated, in Unity / .NET is limited. You get to choose whether your custom data structures are a class (always allocated on the heap) or a struct (allocated on the stack unless it is contained within a class), and that’s it. If you want more magical powers, you must use C#’s unsafe keyword. But unsafe code is just unverifiable code, meaning that it won’t run in the Unity Web Player and probably some other target platforms. For this and other reasons, don’t use unsafe. Because of the above-mentioned limits of the stack, and because C# arrays are just syntactic sugar for System.Array (which is a class), you cannot and should not avoid automatic heap allocation. What you should avoid are unnecessary heap allocations.

Your powers are equally limited when it comes to deallocation. Actually, the only process that can deallocate heap objects is the GC, and its workings are shielded from you. What you can influence is when the last reference to any of your objects on the heap goes out of scope, because the GC cannot touch them before that. This limited power turns out to have huge practical relevance, because periodic garbage collection (which you cannot suppress) tends to be very fast when there is nothing to deallocate.

Each use of foreach creates an enumerator object – an instance of a type implementing the System.Collections.IEnumerator interface – behind the scenes. But does it create this object on the stack or on the heap? That turns out to be an excellent question, because both are actually possible. Most importantly, almost all of the collection types in the System.Collections.Generic namespace (List<T>, Dictionary<K, V>, LinkedList<T>, etc.) are smart enough to return a struct from their implementation of GetEnumerator(). This includes the version of the collections that ships with Mono 2.6.5 (as used by Unity).

So should you avoid foreach loops?

  • Don’t use them in C# code that you allow Unity to compile for you.
  • Do use them to iterate over the standard generic collections (List<T> etc.) in C# code that you compile yourself with a recent compiler. Visual Studio as well as the free .NET Framework SDK are fine, and I assume (but haven’t verified) that the one that comes with the latest versions of Mono and MonoDevelop is fine as well.
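To make the distinction concrete, here is a minimal sketch (the class and method names are mine): iterating a List<T> directly uses the struct List<T>.Enumerator on the stack, whereas iterating the same list through the IEnumerable<T> interface boxes that enumerator onto the heap.

```csharp
using System.Collections.Generic;

public static class ForeachAllocDemo
{
    // Iterating the concrete List<T> uses List<T>.Enumerator,
    // a struct that lives on the stack – no heap allocation.
    public static int SumDirect(List<int> list)
    {
        int sum = 0;
        foreach (int x in list)   // compiler calls List<int>.GetEnumerator(), which returns a struct
            sum += x;
        return sum;
    }

    // Iterating through the IEnumerable<T> interface forces the struct
    // enumerator to be boxed, producing one heap allocation per loop.
    public static int SumViaInterface(IEnumerable<int> seq)
    {
        int sum = 0;
        foreach (int x in seq)    // GetEnumerator() returns IEnumerator<int> – a boxed copy
            sum += x;
        return sum;
    }
}
```

Both methods compute the same result; they differ only in where the enumerator lives.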

Should you avoid closures and LINQ?

You probably know that C# offers anonymous methods and lambda expressions (which are almost but not quite identical to each other). You can create them with the delegate keyword and the => operator, respectively. They are often a handy tool, and they are hard to avoid if you want to use certain library functions (such as List<T>.Sort()) or LINQ.

Do anonymous methods and lambdas cause memory leaks? The answer is: it depends. The C# compiler actually has two very different ways of handling them. To understand the difference, consider the following small chunk of code:

int result = 0;
    
void Update()
{
    for (int i = 0; i < 100; i++)
    {
        System.Func<int, int> myFunc = (p) => p * p;
        result += myFunc(i);
    }
}
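The decisive question is whether the lambda captures a local variable. The (p) => p * p above captures nothing, so the compiler can turn it into a static method and reuse a cached delegate instance. A lambda that does capture a local forces the compiler to generate a hidden ‘display class’ to hold the captured variable, and that closure object is allocated on the heap. A contrasting sketch (the class and method names are mine; exact caching behavior varies by compiler version):

```csharp
using System;

public class LambdaCaptureDemo
{
    public int result = 0;

    // Captures nothing: compiled to a static method; the delegate
    // instance can be cached and reused across iterations.
    public void RunNonCapturing()
    {
        for (int i = 0; i < 100; i++)
        {
            Func<int, int> square = (p) => p * p;
            result += square(i);
        }
    }

    // Captures 'offset': the compiler generates a hidden closure class
    // to hold it, and that object is allocated on the heap.
    public void RunCapturing(int offset)
    {
        for (int i = 0; i < 100; i++)
        {
            Func<int, int> squarePlus = (p) => p * p + offset;
            result += squarePlus(i);
        }
    }
}
```

If such a capturing lambda sits inside Update(), the closure allocation recurs every frame and becomes steady garbage.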

Coroutines

If you launch a coroutine via StartCoroutine(), you implicitly allocate both an instance of Unity’s Coroutine class (21 Bytes on my system) and an Enumerator (16 Bytes). Importantly, no allocation occurs when the coroutine yields or resumes, so all you have to do to avoid a memory leak is to limit calls to StartCoroutine() while the game is running.
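One way to follow that advice is to start a single long-lived coroutine once and let it loop, instead of launching a fresh coroutine (and thus a fresh Coroutine object and enumerator) every time the recurring work comes up. A sketch (the class name is mine):

```csharp
using System.Collections;
using UnityEngine;

public class CoroutineReuse : MonoBehaviour
{
    void Start()
    {
        // Allocate the Coroutine instance and its enumerator once...
        StartCoroutine(Worker());
    }

    // ...and keep it alive for the lifetime of the object, rather than
    // calling StartCoroutine() repeatedly while the game is running.
    IEnumerator Worker()
    {
        // Cache the yield instruction, too – 'new WaitForSeconds(...)'
        // inside the loop would itself allocate on every pass.
        var wait = new WaitForSeconds(0.5f);
        while (true)
        {
            // do the periodic work here
            yield return wait;
        }
    }
}
```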

Strings

No overview of memory issues in C# and Unity would be complete without mentioning strings. From a memory standpoint, strings are strange because they are both heap-allocated and immutable. When you concatenate two strings (be they variables or string-constants) as in:

void Update()
{
    string string1 = "Two";
    string string2 = "One" + string1 + "Three";
}

the runtime has to allocate at least one new string object that contains the result. In String.Concat() this is done efficiently via an external method called FastAllocateString(), but there is no way of getting around the heap allocation (40 Bytes on my system in the example above). If you need to modify or concatenate strings at runtime, use System.Text.StringBuilder.
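For example, a single StringBuilder can be allocated once and reused, so that repeated concatenation only pays for the final ToString() call (the class and method names below are mine):

```csharp
using System.Text;

public class StringBuilderDemo
{
    // One builder, reused across calls: Append() writes into the builder's
    // internal buffer, so once that buffer is large enough, concatenation
    // itself causes no further heap allocation.
    private readonly StringBuilder m_builder = new StringBuilder(64);

    public string BuildLabel(string middle)
    {
        m_builder.Length = 0;          // reset the builder without allocating
        m_builder.Append("One");
        m_builder.Append(middle);
        m_builder.Append("Three");
        return m_builder.ToString();   // ToString() still allocates the result string
    }
}
```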

Boxing

Sometimes, data has to be moved between the stack and the heap. For example, when you format a string as in:

string result = string.Format("{0} = {1}", 5, 5.0f);

… you are calling a method with the following signature:

public static string Format(
  string format,
  params Object[] args
)

In other words, the integer “5” and the floating-point number “5.0f” have to be cast to System.Object when Format() is called. But Object is a reference type whereas the other two are value types. C# therefore has to allocate memory on the heap, copy the values to the heap, and hand Format() a reference to the newly created int and float objects. This process is called boxing, and its counterpart unboxing.

This behavior may not be a problem with String.Format() because you expect it to allocate heap memory anyway (for the new string). But boxing can also show up in less expected places. A notorious example occurs when you want to implement the equality operator “==” for your home-made value types (for example, a struct that represents a complex number). Read all about how to avoid hidden boxing in such cases here.
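The gist of avoiding that hidden boxing is to implement IEquatable<T> and strongly-typed operators, so that comparisons never go through the inherited Object.Equals(object) overload. A sketch for a hypothetical complex-number struct:

```csharp
using System;

// A home-made value type. Implementing IEquatable<Complex> gives callers
// a strongly-typed Equals(), so comparing two instances boxes nothing.
public struct Complex : IEquatable<Complex>
{
    public readonly float Re;
    public readonly float Im;

    public Complex(float re, float im) { Re = re; Im = im; }

    public bool Equals(Complex other)           // no boxing
    {
        return Re == other.Re && Im == other.Im;
    }

    public override bool Equals(object obj)     // boxing path, kept for correctness
    {
        return obj is Complex && Equals((Complex)obj);
    }

    public override int GetHashCode()
    {
        return Re.GetHashCode() ^ (Im.GetHashCode() << 16);
    }

    // The operators delegate to the strongly-typed Equals(), so
    // 'a == b' on two Complex values allocates nothing.
    public static bool operator ==(Complex a, Complex b) { return a.Equals(b); }
    public static bool operator !=(Complex a, Complex b) { return !a.Equals(b); }
}
```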

Now we also want to avoid unnecessary deallocations, so that while our game is running, the garbage collector (GC) doesn’t create those ugly drops in frames-per-second. Object pooling is ideal for this purpose. 

Object Pooling

The idea behind object pooling is extremely simple. Instead of creating new objects with the new operator and allowing them to become garbage later, we store used objects in a pool and reuse them as soon as they’re needed again. The single most important feature of the pool – really the essence of the object-pooling design pattern – is to allow us to acquire a ‘new’ object while concealing whether it’s really new or recycled. This pattern can be realized in a few lines of code:

public class ObjectPool<T> where T : class, new()
{
    private Stack<T> m_objectStack = new Stack<T>();

    public T New()
    {
        return (m_objectStack.Count == 0) ? new T() : m_objectStack.Pop();
    }

    public void Store(T t)
    {
        m_objectStack.Push(t);
    }
}

Simple, yes, but a perfectly good realization of the core pattern. (If you’re confused by the “where T...” part, it is explained below.) To use this class, you have to replace allocations that make use of the new operator, such as here…

void Update()
{
    MyClass m = new MyClass();
}

… with paired calls to New() and Store():

ObjectPool<MyClass> poolOfMyClass = new ObjectPool<MyClass>();

void Update()
{
    MyClass m = poolOfMyClass.New();

    // do stuff...

    poolOfMyClass.Store(m);
}

This is annoying because you’ll need to remember to call Store(), and to do so at the right place. Unfortunately, there is no general way to simplify this usage pattern further, because neither the ObjectPool class nor the C# compiler can know when your object has gone out of scope. Well, actually, there is one way – it is called automatic memory management via garbage collection, and its shortcomings are the reason you’re reading these lines in the first place! That said, in some fortunate situations, you can use the pattern explained under “A pool with collective reset” at the end of this article. There, all your calls to Store() are replaced by a single call to a ResetAll() method.

Functionality Requirements

  • Many types of objects need to be ‘reset’ in some way before they can be reused. At a minimum, all member variables can be set back to their default state. This can be handled transparently by the pool, rather than by the user. When and how to reset is a matter of design that relates to the following two distinctions.
    • Resetting can be eager (i.e., executed at the time of storage) or lazy (executed right before the object is reused).
    • Resetting can be managed by the pool (i.e., transparently to the class that is being pooled) or by the class (transparently to the person who is declaring the pool object).
  • In the example above, the object pool ‘poolOfMyClass‘ had to be declared explicitly with class-level scope. Obviously, a new such pool would have to be declared for each new type of resource (My2ndClass etc.). Alternatively, it is possible to have the ObjectPool class create and manage all these pools transparently to the user.
  • Several object-pooling libraries you find out there aspire to manage very heterogeneous kinds of scarce resources (memory, database connections, game objects, external assets etc.). This tends to boost the complexity of the object pooling code, as the logic behind handling such diverse resources varies a great deal.
  • Some types of resources (e.g., database connections) are so scarce that the pool needs to enforce an upper limit and offer a safe way of failing to allocate a new/recycled object.
  • If objects in the pool are used in large numbers at relatively ‘rare’ moments, we may want the pool to have the ability to shrink (either automatically or on-demand).
  • Finally, the pool can be shared by several threads, in which case it would have to be thread-safe.

Which of these are worth implementing? Your answer may differ from mine, but allow me to explain my own preferences.

  • Yes, the ability to ‘reset’ is a must-have. But, as you will see below, there is no point in choosing between having the reset logic handled by the pool or by the managed class. You are likely to need both, and the code below will show you one version for each case.
  • Unity imposes limitations on your multi-threading – basically, you can have worker threads in addition to the main game thread, but only the latter is allowed to make calls into the Unity API. In my experience, this means that we can get away with separate object pools for all our threads, and can thus delete ‘support for multi-threading’ from our list of requirements.
  • Personally, I don’t mind too much having to declare a new pool for each type of object I want to pool. The alternative means using the singleton pattern: you let your ObjectPool class create new pools as needed and store them in a dictionary of pools, which is itself stored in a static variable. To get this to work safely, you’d have to make your ObjectPool class thread-safe. (I would avoid multi-threaded pooling solutions, as they are easy to get wrong.)
  • In line with the scope of this three-part blog, I’m only interested in pools that deal with one type of scarce resource: memory. Pools for other kinds of resources are important, too, but they’re just not within the scope of this post. This really narrows down the remaining requirements.
    • The pools presented here do not impose a maximum size. If your game uses too much memory, you are in trouble anyway, and it’s not the object pool’s business to fix this problem.
    • By the same token, we can assume that no other process is currently waiting for you to release your memory as soon as possible. This means that resetting can be lazy, and that the pool doesn’t have to offer the ability to shrink.

A basic pool with initialization and reset

Our revised ObjectPool<T> class looks as follows:

public class ObjectPool<T> where T : class, new()
{
    private Stack<T> m_objectStack;

    private Action<T> m_resetAction;
    private Action<T> m_onetimeInitAction;

    public ObjectPool(int initialBufferSize, Action<T>
        ResetAction = null, Action<T> OnetimeInitAction = null)
    {
        m_objectStack = new Stack<T>(initialBufferSize);
        m_resetAction = ResetAction;
        m_onetimeInitAction = OnetimeInitAction;
    }

    public T New()
    {
        if (m_objectStack.Count > 0)
        {
            T t = m_objectStack.Pop();

            if (m_resetAction != null)
                m_resetAction(t);

            return t;
        }
        else
        {
            T t = new T();

            if (m_onetimeInitAction != null)
                m_onetimeInitAction(t);

            return t;
        }
    }

    public void Store(T obj)
    {
        m_objectStack.Push(obj);
    }
}

This implementation is very simple and straightforward. The parameter ‘T‘ has two constraints that are specified by way of “where T : class, new()“. Firstly, ‘T‘ has to be a class (after all, only reference types need to be object-pooled), and secondly, it must have a parameterless constructor.

The constructor takes your best guess of the maximum number of objects in the pool as its first parameter. The other two parameters are optional closures – if given, the first is used to reset a recycled object, while the second initializes a newly created one. ObjectPool<T> has only two methods besides its constructor, New() and Store(). Because the pool uses a lazy approach, all work happens in New(), where recycled and new objects are either reset or initialized via those closures. Here is how the pool could be used in a class that derives from MonoBehaviour.

class SomeClass : MonoBehaviour
{
    private ObjectPool<List<Vector3>> m_poolOfListOfVector3 =
        new ObjectPool<List<Vector3>>(32,
        (list) => {
            list.Clear();
        },
        (list) => {
            list.Capacity = 1024;
        });

    void Update()
    {
        List<Vector3> listVector3 = m_poolOfListOfVector3.New();

        // do stuff

        m_poolOfListOfVector3.Store(listVector3);
    }
}

A pool that lets the managed type reset itself

The basic version of the object pool does what it is supposed to do, but it has one conceptual blemish: it violates the principle of encapsulation insofar as it separates the code for initializing / resetting an object from the definition of the object’s type. This leads to tight coupling, which should be avoided if possible. In the SomeClass example above, there is no real alternative because we cannot go and change the definition of List<T>. However, when you use object pooling for your own types, you may want to have them implement the following simple interface IResetable instead. The corresponding class ObjectPoolWithReset<T> can then be used without specifying either of the two closures as parameters.

public interface IResetable
{
    void Reset();
}

public class ObjectPoolWithReset<T> where T : class, IResetable, new()
{
    private Stack<T> m_objectStack;

    private Action<T> m_resetAction;
    private Action<T> m_onetimeInitAction;

    public ObjectPoolWithReset(int initialBufferSize, Action<T>
        ResetAction = null, Action<T> OnetimeInitAction = null)
    {
        m_objectStack = new Stack<T>(initialBufferSize);
        m_resetAction = ResetAction;
        m_onetimeInitAction = OnetimeInitAction;
    }

    public T New()
    {
        if (m_objectStack.Count > 0)
        {
            T t = m_objectStack.Pop();

            t.Reset();

            if (m_resetAction != null)
                m_resetAction(t);

            return t;
        }
        else
        {
            T t = new T();

            if (m_onetimeInitAction != null)
                m_onetimeInitAction(t);

            return t;
        }
    }

    public void Store(T obj)
    {
        m_objectStack.Push(obj);
    }
}
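For illustration, here is a hypothetical pooled type that implements IResetable, keeping the reset logic inside the type itself (the interface is repeated so the snippet stands alone; Projectile is my own example, not from the article):

```csharp
// Same interface as defined above, repeated for completeness.
public interface IResetable
{
    void Reset();
}

// A hypothetical pooled type: its reset logic lives next to its data,
// so an ObjectPoolWithReset<Projectile> needs no reset closure at all.
public class Projectile : IResetable
{
    public float X;
    public float Y;
    public bool Alive;

    public void Reset()
    {
        X = 0f;
        Y = 0f;
        Alive = false;
    }
}
```

With this in place, a pool can be declared simply as new ObjectPoolWithReset<Projectile>(64); New() calls Reset() on every recycled instance automatically.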

A pool with collective reset

Some types of data structures in your game may never persist over a sequence of frames, but get retired at or before the end of each frame. In this case, when we have a well-defined point in time by the end of which all pooled objects can be stored back in the pool, we can rewrite the pool to be both easier to use and significantly more efficient. Let’s look at the code first.

public class ObjectPoolWithCollectiveReset<T> where T : class, new()
{
    private List<T> m_objectList;
    private int m_nextAvailableIndex = 0;

    private Action<T> m_resetAction;
    private Action<T> m_onetimeInitAction;

    public ObjectPoolWithCollectiveReset(int initialBufferSize, Action<T>
        ResetAction = null, Action<T> OnetimeInitAction = null)
    {
        m_objectList = new List<T>(initialBufferSize);
        m_resetAction = ResetAction;
        m_onetimeInitAction = OnetimeInitAction;
    }

    public T New()
    {
        if (m_nextAvailableIndex < m_objectList.Count)
        {
            // an allocated object is already available; just reset it
            T t = m_objectList[m_nextAvailableIndex];
            m_nextAvailableIndex++;

            if (m_resetAction != null)
                m_resetAction(t);

            return t;
        }
        else
        {
            // no allocated object is available
            T t = new T();
            m_objectList.Add(t);
            m_nextAvailableIndex++;

            if (m_onetimeInitAction != null)
                m_onetimeInitAction(t);

            return t;
        }
    }

    public void ResetAll()
    {
        m_nextAvailableIndex = 0;
    }
}

The changes to the original ObjectPool<T> class are substantial this time. Regarding the signature of the class, the Store() method is replaced by ResetAll(), which only needs to be called once, when all allocated objects should go back into the pool. Inside the class, the Stack<T> has been replaced by a List<T>, which keeps references to all allocated objects even while they’re being used. We also keep track of the index of the most recently created-or-released object in the list. In that way, New() knows whether to create a new object or reset an existing one.
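A hypothetical usage sketch, assuming the ObjectPoolWithCollectiveReset<T> class defined above (the FrameScratchBuffers class and its members are my own names): scratch lists are handed out during the frame and all reclaimed with one ResetAll() call at the end of it.

```csharp
using System.Collections.Generic;

public class FrameScratchBuffers
{
    // Per-frame scratch lists that never outlive the frame. The reset
    // closure clears each recycled list before it is handed out again.
    private ObjectPoolWithCollectiveReset<List<int>> m_lists =
        new ObjectPoolWithCollectiveReset<List<int>>(16, (list) => list.Clear());

    public void Frame()
    {
        // Grab as many lists as this frame needs...
        List<int> a = m_lists.New();   // fresh, or recycled and cleared
        List<int> b = m_lists.New();

        // ... fill and use a and b during the frame ...

        // ...and return all of them with a single call at the end of
        // the frame – no individual Store() calls to remember.
        m_lists.ResetAll();
    }
}
```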
