A typical computer system used for high-performance computing is the NUMA architecture, which is built for shared-memory parallelization. In this architecture, each NUMA domain consists of a set of cores with its own local memory. Domains are connected via an interconnect, which allows cores to access memory located on remote domains. Using the local memory of each domain in parallel yields high memory bandwidth, and this data locality is achieved by aligning data and computation; otherwise, the slower interconnect between domains has to be used. Data locality is therefore crucial on this architecture. For shared-memory parallelization, OpenMP is generally used. OpenMP 3.0 introduced tasking to support complex or recursive parallel programs. In contrast to work-sharing, where data and computation can easily be aligned, tasking offers no way to control data locality. To address this, OpenMP 5.0 adds a task-to-data affinity clause that supports multiple affinities. The problem this thesis addresses is how to help schedule tasks using these multiple affinities. I extended an experimental LLVM runtime to support multiple affinities using different strategies and evaluated it with several benchmarks. While not all applications benefit from multiple affinities, other cases demonstrated a speedup of nearly 2x on an 8-socket NUMA machine.