Dr Andrew Grundy

About Me

I am an accomplished Lead Software Engineer with a demonstrated history of working in the computer games industry. I am skilled in many programming languages, and my career has predominantly focused on server-side technologies. I currently work at The Multiplayer Group, where my team and I are building a matchmaking and game server provisioning solution using Go microservices in Kubernetes.

I have a PhD in Computer Science from the University of Nottingham. My thesis, 'Congestion Control Framework For Delay-Tolerant Communication', focused on providing a congestion control framework for use in delay- and disconnection-prone networks and addressed both single-copy and replication-based message dissemination.

Prior to researching at Nottingham I studied at the University of Leicester, where I obtained a BSc Hons degree in Computer Science. My dissertation focused on methods for establishing and maintaining a minimum cost coverage set in mobile ad-hoc networks with changing topologies.

Industry Experience

The Multiplayer Group

2020 - Present: Lead Software Engineer

The Multiplayer Group offer Co-Dev, Full-Dev and Analytics services. MPG specialise in creating the highest standard of mind-blowing multiplayer experiences for their partners.

Responsibilities

As a Lead Software Engineer I am charged with technical decision making, sprint planning, holiday approvals, performance reviews and generally championing my team. I am a code owner on the Google for Games Open Match project, contributing by fixing bugs, reviewing pull requests, aiding in the design of new features and responding to questions in the project's Slack workspace.

Matchmaking & Game Server Provisioning

The matchmaking & game server provisioning project comprises a collection of microservices that are written in Go and integrate with Open Match, Agones, Knative and Kubernetes.

The services are designed to be platform agnostic and are deployed using Terraform, Terragrunt, Helm and Argo CD.

We use Open Telemetry, Prometheus and Grafana to provide insight into the performance of the services.

We leverage Fluentd, Elasticsearch and Kibana in order to aggregate, search and display the log information output from all of the services that we host.

Lockwood Publishing

2017 - 2020: Senior Server Programmer

Lockwood Publishing are the creators of Avakin Life; their aim is to become the biggest social and mobile gaming company around.

Responsibilities

At Lockwood, I joined an established team with a substantial amount of code already written. My role focused on software design and development. I worked predominantly with the Go programming language and with multiple data storage technologies (Postgres, DynamoDB, Redis, Elasticsearch, S3, InfluxDB, Cayley). Below I have listed the projects where I implemented the server-side code.

Noteworthy Projects

  • Popups service used to inform the client about popups available to be displayed to the user.
  • Events service that provided the client with a list of events that were going to occur in the game.
  • Friend Codes service that provides a way for players to connect via a short alphanumeric code linked to their account. This was optimised by using a 2-way hash function (string code input and integer output, or vice versa) and an integer index.
  • Game Stats service that maps a key to a counter, used to track a player's progression in various scripted mini-games.
  • Game Rewards service designed to liberate the operations team. Throughout the backend code events are triggered, each transmitting its own preconfigured set of parameters. The operations team can define rules that decide what rewards, if any, are given out at each of these points.
  • Daily Bonus service used to give players a reward each day to encourage retention; in reality it was a very thin API that called the Game Rewards service.
  • Fashion Game service for the fashion contest game. The aim of the game is to dress your avatar, set the pose and submit a photograph of your creation into the contest. Players then vote on these entries, and rewards are awarded for the rank and starpower (percentage of upvotes) achieved; voting is incentivised too.
  • Locale service to serve locale data to the client on demand, replacing the need for the client to download the full set of locale data for the entire game from the CDN.
  • Player Reporting service to deal with players reporting other players, with functionality for the player moderation team to manage reports effectively.
  • Username Searching service that maintains a subset of player data in Elasticsearch. Because the game allows Unicode usernames, some deobfuscation (mapping Unicode characters to ASCII) is done to help player support and moderation find the player they are looking for.
  • Seasonal Gifting service for a small gifting game that allowed players to send each other gifts that were locked until a given day (e.g. Christmas), at which point they could be opened.
  • Single Use Token service used by other services. This was initially created so a reward could be sent to a player via a link; the token formed part of the link and prevented the player from reusing it.
  • In-game Survey service used to ask the players simple multiple choice questions.
  • Community Goals service which allowed all of the players in the game to increment a single counter in order to reach a target.
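The Friend Codes optimisation above can be illustrated with a minimal sketch. This is written in Python for brevity (the service itself was in Go), and the alphabet, code length and multiplier below are illustrative assumptions, not production values: the idea is a reversible mapping between an integer account index and a short alphanumeric code, so only the integer needs indexing.

```python
# Hypothetical reversible integer <-> code mapping; constants are illustrative.
ALPHABET = "ABCDEFGHJKMNPQRSTUVWXYZ23456789"  # unambiguous characters only
BASE = len(ALPHABET)
MOD = BASE ** 6                 # code space for 6-character codes
MULT = 15485863                 # prime, coprime with MOD
INV = pow(MULT, -1, MOD)        # modular inverse enables the reverse mapping

def encode(n: int) -> str:
    """Map an integer to a fixed-length alphanumeric friend code."""
    x = (n * MULT) % MOD        # mix so consecutive ids look unrelated
    chars = []
    for _ in range(6):
        x, r = divmod(x, BASE)
        chars.append(ALPHABET[r])
    return "".join(chars)

def decode(code: str) -> int:
    """Reverse the mapping back to the integer used in the index."""
    x = 0
    for ch in reversed(code):
        x = x * BASE + ALPHABET.index(ch)
    return (x * INV) % MOD
```

Because `encode` and `decode` are exact inverses, the database only needs an integer column and index; the string code never has to be stored.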

Kwalee LTD

2014 - 2017: Lead Server Programmer

Kwalee is an expanding, independent, mobile game developer based in Leamington Spa.

Responsibilities

My responsibilities at Kwalee can be grouped into 4 main areas: Game Server Framework Development, Game Specific Development, Infrastructure Development and Team Management; below is a description of each:

Game Server Framework Development

Historically Kwalee used a monolithic .NET (VB and C#) codebase and a MySQL database on their game servers. I was employed to re-engineer this service. The main goals for the new system were horizontal scalability and low latency globally; in addition to these operational goals I added the following software engineering goals to the specification: the service should be RESTful, modular and customisable for game-specific needs.

This work required the following technologies: Python, Flask, Flask-RESTful, Flask-WTF, Couchbase, Elasticsearch, Celery.

Game Specific Development

Each game Kwalee creates has elements unique to it, and the server has to interact with these components in a meaningful way, so it did not make sense for them to be developed generically. The requirements for these components were gathered from the separate game development teams and focused on providing them with the service they needed.

Infrastructure Development

This area of my role involved configuring the AWS infrastructure and using technologies such as NGINX, uWSGI and Supervisor. The New Relic service provided monitoring and service performance insight.

Team Management

As the lead of the server team I was charged with sprint planning, performance reviewing and interviewing candidates.

Hive Online LTD

2011 - 2014: Server Developer

Hive was a marketing, packaging and technology company based in Wymeswold, Leicestershire. Hive specialised in increasing frequency of purchase and weight of purchase for fast-moving consumer goods (FMCG) brands via proof-of-purchase marketing campaigns.

Responsibilities

The responsibilities I had at Hive can be grouped into 3 main areas: Marketing Platform Framework Development, Client Specific Development and Infrastructure Development; below is a description of each:

Marketing Platform Framework Development

One of my main day-to-day tasks was the maintenance and improvement of 'The Hive Platform'. Since joining the company I had been instrumental in modernising the code-base, transitioning from a single monolithic application towards a modular toolchain. This involved substantial refactoring: correcting architectural flaws, removing code smells, lowering coupling and increasing cohesion.

This work required the following technologies: Python, Django, Celery, RabbitMQ, Redis, MySQL, REST, SOAP.

Client Specific Development

Each client project brought its own set of challenges: anything from a simple additional signup requirement to not being able to store any personally identifiable data and having to access all of a consumer's details via an API. Typically the Hive marketing platform was configurable enough to facilitate a brand's marketing requirements, but when something different came along it was built for the client to meet their specification.

This work required the same technologies as the Marketing Platform Framework Development.

Infrastructure Development

This development was concerned with taking Hive's existing hosting solution (Dedicated Rackspace servers) and making it more robust, better structured and responsive to demand (Rackspace Cloud Servers). This required the following technologies: NGINX, uWSGI, Supervisor, RSyslog, Redis, MySQL, rpmbuild, CFEngine3, Pushover, Zabbix, Python, Django, Rackspace pyrax and paramiko.

Noteworthy Projects

Unique Code Algorithm

One of the main issues facing Hive when I started working for them was the storage and data management of the codes they were generating, encrypting and later cross-referencing. I developed an algorithm for Hive that, through a combination of information theory and cryptographic techniques, allows unique codes to be produced and redeemed without storing individual codes.
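The general idea can be sketched as follows. This is a minimal illustration of one well-known approach, not Hive's actual algorithm, and the key, code length and MAC truncation are hypothetical: embed a sequence number in each code alongside a truncated keyed MAC, so validity is checked by recomputation rather than by looking up a stored code.

```python
import hashlib
import hmac

SECRET = b"campaign-key"  # hypothetical per-campaign secret

def make_code(seq: int) -> str:
    """Derive a printable code from a sequence number plus a truncated MAC."""
    body = format(seq, "08X")  # the unique part: 8 hex digits of the sequence
    tag = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()[:6].upper()
    return body + tag          # 14-character code

def validate(code: str) -> bool:
    """Recompute the MAC: no per-code storage is needed to check validity."""
    body, tag = code[:8], code[8:]
    expect = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()[:6].upper()
    return hmac.compare_digest(tag, expect)
```

Guessed or mistyped codes fail the MAC check, while every sequence number yields a distinct, verifiable code; only redemption state, not the codes themselves, ever needs persisting.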

REST API Framework

A client request for our code validation API to handle 50 times more traffic than it could process at the time led me to implement a much faster API framework and to re-engineer the code behind the unique code submission process.

System Integration

As a result of the code algorithm work I carried out, Hive's code generation solution has become much more flexible, allowing it to be distributed. The main benefit of allowing code generation to occur in a decentralised way is that I have been able to provide a code generation library for other systems to generate codes, specifically a DLL for use with C# .NET. This work has required the following technologies: C++, C++/CLI, C#, .NET, as well as Python, Django and REST.

Freelance Web Developer

2005 - 2011: Self employed Freelance Web Developer

During my time at University, in order to fund my education, I developed websites for small businesses and worked as a freelance developer. During this time I worked with a number of different languages such as PHP, Perl, Java, JavaScript, Python, XHTML, CSS, XML, XSLT, MySQL and MS SQL.

Education

2008 - 2011: Ph.D. in Computer Science from the University of Nottingham.

2005 - 2008: BSc Hons in Computer Science from the University of Leicester.

2001 - 2003: BTEC National Diploma in Computing from Leicester College.

Research Experience

Ph.D. Computer Science

2008 - 2011 University of Nottingham

Funding: EPSRC

Supervisors: Dr. Milena Radenkovic and Prof. Uwe Aickelin

Congestion Control Framework For Delay-Tolerant Communications

Detecting and dealing with congestion in delay tolerant networks is an important and challenging problem. Current DTN forwarding algorithms typically direct traffic towards particular nodes in order to maximise delivery ratios and minimise delays, but as traffic demands increase these nodes may become unusable.

This thesis proposes Café, an adaptive congestion aware framework that reduces traffic entering congesting network regions by using alternative paths and dynamically adjusting sending rates, and CafRep, a replication scheme that considers the level of congestion and the forwarding utility of an encounter when dynamically deciding the number of message copies to forward.

Our framework is a fully distributed, localised, adaptive algorithm that evaluates a contact's next-hop potential by means of a utility comparison of a number of congestion signals, in addition to that contact's forwarding utility, both from a local and regional perspective. We extensively evaluate our work using two different applications and three real connectivity traces showing that, independent of the network interconnectivity and mobility patterns, our framework outperforms a number of major DTN routing protocols.

Our results show that both Café and CafRep consistently outperform the state-of-the-art algorithms in the face of increasing traffic demands. Additionally, with fewer replicated messages, our framework increases success ratio and the number of delivered packets, and reduces the message delay and the number of dropped packets, while keeping node buffer availability high and congesting at a substantially lower rate, demonstrating our framework's more efficient use of network resources.

Additional Achievements

I have published work at a number of premier conferences and presented my work both at conferences and university group seminars, which has helped me to develop the ability to communicate clearly. During the course of my Ph.D. I attended a number of training courses, including Planning Research Projects; Statistical Analysis and Data Sampling; Statistical Analysis Using R; Introduction to Teaching; Marking and Assessing; and Demonstrating in Computer Science Practicals. Since completing my teacher training I have marked exam papers, demonstrated in labs and given a tutorial lecture. In addition, I was a paid supervisor of a Master's student, overseen by my supervisor. The student's dissertation focused on self-organised security in mobile ad hoc networks, culminating in an award for best Master's project.

BSc Hons Computer Science

2005 - 2008 University of Leicester

Tutor: Professor Reiko Heckel

Supervisor: Professor Thomas Erlebach

Dissertation Synopsis

My BSc dissertation was a continuation of the Nuffield Foundation funded research work I undertook during the summer of 2007. I developed an algorithm that provided a solution for the wireless ad-hoc network routing backbone problem (a minimum cost coverage set of a weighted graph with changing topology). This centred on the observation that by weighting each edge in the network graph as the sum of the costs of the nodes it connects, you can compute a good minimum cost coverage set approximation by means of a distributed minimum spanning tree algorithm. My implementation utilised threaded programming techniques, illustrated the solution as a graph in a GUI and produced a trace file for statistical analysis.
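The edge-weighting observation can be illustrated with a small sketch. Here a centralised Kruskal's algorithm stands in for the distributed minimum spanning tree algorithm used in the dissertation, and the node costs are made up for illustration:

```python
def mst_edges(node_cost: dict, edges: list) -> list:
    """Kruskal's algorithm where each edge's weight is the sum of the
    costs of the two nodes it connects."""
    parent = {n: n for n in node_cost}

    def find(n):
        # Union-find with path compression.
        while parent[n] != n:
            parent[n] = parent[parent[n]]
            n = parent[n]
        return n

    # Weight edges by summing endpoint costs, then take them cheapest-first.
    weighted = sorted(edges, key=lambda e: node_cost[e[0]] + node_cost[e[1]])
    tree = []
    for u, v in weighted:
        ru, rv = find(u), find(v)
        if ru != rv:            # keep the edge only if it joins two components
            parent[ru] = rv
            tree.append((u, v))
    return tree
```

Because low-cost nodes make every edge touching them cheaper, the resulting tree is biased towards routing through cheap nodes, which is what makes it a useful coverage set approximation.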

Taught Modules

Functional Programming, Logic Programming, Object-Oriented Programming, Software Engineering, Internet Programming, Theory of Computation, Discrete Mathematics, System Modelling and Design, Compression Methods, Multimedia, and Cryptography and System Security.

Computer Science Research Internship

Summer 2007 University of Leicester

Funding: Nuffield Science Bursary

Supervisor: Professor Thomas Erlebach

Project Synopsis

During the two-month bursary I investigated algorithms for routing backbone construction in wireless ad-hoc networks. After familiarising myself with the relevant literature I implemented two variants of the Wang-Wang-Li (WWL) algorithm and a centralised global greedy approach. I also implemented a graphical user interface for visualising the networks and the computed routing structures.

Code

This page lists my open source projects and contributions.

Open Match (GitHub)

I am a code owner on the Google for Games Open Match project, contributing by fixing bugs, reviewing pull requests, aiding in the design of new features and responding to questions in the project's Slack workspace.

om-stream (GitHub): a streaming rework of Open Match.

This project is a prototype and I hope that, if it yields good results, some of the lessons learned will be adopted by the Open Match project. The original objective of om-stream was to keep the functionality of Open Match but move away from having singletons like the synchroniser, director and evaluator. The synchroniser and director are replaced by a streaming database, and the evaluator is replaced by an atomic database action; these changes should improve capacity and resilience.
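The "atomic database action" idea can be sketched in miniature. This is a hypothetical in-memory stand-in, not om-stream's actual implementation (which is in Go against a real database): instead of a central evaluator resolving conflicting match proposals, each proposal atomically claims all of its tickets or none of them.

```python
def try_claim(assigned: set, match_tickets: list) -> bool:
    """Atomically claim every ticket in a proposed match, or none of them.
    Stand-in for an atomic database operation replacing the evaluator."""
    if any(t in assigned for t in match_tickets):
        return False            # some ticket is already matched: reject
    assigned.update(match_tickets)
    return True                 # all tickets claimed together
```

With the conflict check pushed into a single atomic operation, many match-proposing workers can run concurrently without a singleton arbitrating between them.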

A pure Python Bitmap Index (PyPI | GitHub)

I wrote this code because I needed a pure Python bitmap index that could serialise to a string. To compress the string representation of the binary data I chose run-length encoding, as it was a good fit for the large, sparse bitmaps I needed to store.
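The run-length idea can be sketched as follows; this is a simplified illustration of the technique, not the library's actual wire format:

```python
def rle_encode(bits: str) -> str:
    """Compress a bitstring like '0000011100' into runs: '0:5,1:3,0:2'."""
    runs = []
    i = 0
    while i < len(bits):
        j = i
        while j < len(bits) and bits[j] == bits[i]:
            j += 1                      # extend the current run
        runs.append(f"{bits[i]}:{j - i}")
        i = j
    return ",".join(runs)

def rle_decode(encoded: str) -> str:
    """Expand 'bit:count' runs back into the original bitstring."""
    return "".join(bit * int(count)
                   for bit, count in (run.split(":") for run in encoded.split(",")))
```

A sparse bitmap of a million zeros with a handful of set bits collapses to a few short runs, which is why run-length encoding suits this use case so well.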