# Solution

My solution to the backend developer challenge is a Flask service that
implements a web API server, a test runner, and API stats storage.

As requested, the Flask app stores its data in Redis, with both services
run via Docker Compose.

The project is defined using the `pyproject.toml` packaging standard.

After finishing my initial submission in a single file, I refactored it to
separate concerns more clearly: the Redis stats client in `redis_client`
provides the stats and testing-specific functionality, and the Flask app
with its route handlers lives in a `create_app` factory. These modules do
not perfectly separate the domain concerns, which has more to do with my
unfamiliarity with creating a Flask `app` instance by hand in a
production-ready manner. I did attempt some basic dependency injection of
the Redis client, but the runner then complained that the factory could
not be called with zero arguments, so I returned to instantiating a
singleton client in the context of a single `app` instance.

If I were to spend more time on the project, I would add:

1) Configuration for both the Flask `app` and the `RedisClient`. For the
Redis client, I'd want the endpoint of the Redis instance to be
configurable, and I'd add proper authentication for a production database
or cluster.
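Environment variables are the natural fit given the Docker Compose setup. A sketch of what that configuration could look like; the variable names (`REDIS_HOST`, `REDIS_PORT`, `REDIS_PASSWORD`) are my assumptions, not part of the submission:

```python
import os


def redis_settings() -> dict:
    """Read Redis connection settings from the environment, with
    development defaults suitable for a local Docker Compose setup."""
    return {
        "host": os.environ.get("REDIS_HOST", "localhost"),
        "port": int(os.environ.get("REDIS_PORT", "6379")),
        # A production database or cluster would require credentials;
        # None keeps local development auth-free.
        "password": os.environ.get("REDIS_PASSWORD"),
    }
```

Compose could then inject something like `REDIS_HOST=redis` so the app resolves the Redis service by name inside the Compose network.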

2) Implement the stats counter as middleware. While developing, I went
back and forth on how the `/api/*` routes should handle hit counting,
primarily around validation. Once I decided to always count a hit, even
if the request fails testing, I felt it would be nice to have a decorator
or middleware that automatically counts hits for every route served by
the Flask app. I didn't feel the additional time it would take to write
was worth it once I had a correct submission.

3) Move test running out to a background worker. Regardless of the intent
of the test runner, since the data is already persisted in Redis, test
runs could be parallelized. I _believe_ I used the atomic increment
operation correctly to facilitate this use case. This would also allow:
 - The user to receive quick feedback that their job was submitted.
 - The application server to free up a thread.
 - Tests to run for longer than an HTTP request may take to time out.
 - Removing the odd context handler I had to find to modify the request context.
 - Testing the web service more accurately from a real client context, so scale or other concurrency issues can be found.
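The atomic operations in question look roughly like this sketch, using redis-py's command names; the key names are my assumptions. `INCR` and `ZINCRBY` each execute as a single server-side operation, so parallel workers never race on a read-modify-write:

```python
def record_hit(r, route: str) -> None:
    """`r` is any client exposing redis-py's incr/zincrby commands."""
    # Total hit count across all routes; atomic on the server.
    r.incr("stats:total_hits")
    # Per-route sorted set; ZREVRANGE can later list the most-hit routes.
    r.zincrby("stats:route_hits", 1, route)
```

Because each call is atomic, any number of parallel test runners can safely invoke `record_hit` against the same keys.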

4) I would include more test cases, as right now error handling and
domain concerns are somewhat interwoven and inconsistent:
 - The RedisClient uses `zrevrange` even though it doesn't know _why_ it ought to.
 - The Flask app handles uncaught Redis exceptions, since it may want to present exceptions to the client differently.
 - The validation function is not unit tested, even though it is easily extractable. This is because I felt the web-service-level exception logic was too specific to generalize into a helper function at this time.
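If the validation logic were extracted, unit testing it would be straightforward. A sketch, where `validate_payload` and its rules are hypothetical stand-ins for the inline validation:

```python
def validate_payload(payload: dict) -> str:
    """Hypothetical extracted validator: require a non-empty 'url'."""
    url = payload.get("url")
    if not isinstance(url, str) or not url:
        raise ValueError("payload must include a non-empty 'url' string")
    return url


def test_accepts_valid_payload():
    assert validate_payload({"url": "http://example.com"}) == "http://example.com"


def test_rejects_missing_url():
    try:
        validate_payload({})
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for empty payload")
```

Extracting the function this way keeps the web-service-level exception handling (status codes, response bodies) in the route handlers, where it currently lives.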