I recently created a new repository (https://github.com/Cybermaxs/Toppler) and I would like to share with you the idea behind this project.
It's officially been a year since I discovered Redis, and like every fan I can see possibilities here and there. I'm also quite surprised that this DB is so little known in the Microsoft stack, but I'm sure this will change in the coming months as Redis Cache becomes the default caching service in Azure. But Redis is not just a cache! This key-value store has unique features and possibilities. Give it a chance.
So what is Toppler? It's just a very small package built on top of StackExchange.Redis that helps you count hits and query emitted events to build rankings/leaderboards.
Here are a few use cases where Toppler could help you:
- You want a counter for various events (an item is viewed, a game is played, …) and statistics about emitted events for custom time ranges (today, last week, this month, …).
- You want to implement a leaderboard with single or incremental updates.
- You want to track events in custom dimensions and get statistics for one, two, … or all dimensions.
- You want to provide basic recommendations by combining the most-emitted events and random items for custom time ranges.
How Does It Work?
One of the most important concepts in Toppler is Granularity. Each hit is stored in a range of sorted sets representing different time granularities (e.g. seconds, minutes, …).
The Granularity class has three properties (Factor, Size, and TTL; see the table below) that are used to compose a smart key following this pattern:
[PREFIX]:[GRAN_NAME]:[TS_ROUNDED_BY_FACTOR_AND_SIZE]:[TS_ROUNDED_BY_FACTOR]
where [PREFIX] is the combination of the configured namespace with the current dimension, and [TS_ROUNDED_XX] is the unix timestamp rounded for the given granularity.
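In code, the key composition could look like the sketch below. It assumes the rounding is a simple truncation (the timestamp minus its remainder modulo the step); the `toppler:default` prefix and the function names are illustrative, not Toppler's actual implementation.

```python
# Sketch of a Toppler-style key composition (assumed truncation rounding;
# "toppler:default" is an illustrative prefix, not necessarily the real default).

def round_down(ts, step):
    """Truncate a unix timestamp down to a multiple of `step` seconds."""
    return ts - (ts % step)

def compose_key(prefix, gran_name, factor, size, ts):
    """[PREFIX]:[GRAN_NAME]:[TS_ROUNDED_BY_FACTOR_AND_SIZE]:[TS_ROUNDED_BY_FACTOR]"""
    return "{0}:{1}:{2}:{3}".format(
        prefix, gran_name,
        round_down(ts, factor * size),  # the key's time window
        round_down(ts, factor))         # the slot inside that window

# Example with the "hour" granularity (Factor=3600, Size=168):
print(compose_key("toppler:default", "hour", 3600, 168, 1405606998))
# toppler:default:hour:1405555200:1405605600
```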
Here are the values for the 4 default granularities (Factor and TTL are in seconds; Size is a number of Factor-sized slots):

| Granularity | Factor | TTL | Size |
|-------------|--------|----------|------|
| Second | 1 | 7200 | 3600 |
| Minute | 60 | 172800 | 1440 |
| Hour | 3600 | 1209600 | 168 |
| Day | 86400 | 63113880 | 365 |
The TTL is assigned to each key (using the Redis EXPIREAT command) to keep DB space usage reasonable.
So, a hit emitted at 17/07/2014 14:23:18 (UTC) will create or update one key per granularity.
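Under the truncation-rounding assumption above, the four keys for that timestamp can be derived from the table (again, `toppler:default` is an illustrative prefix):

```python
# Compute the keys touched by a hit at 17/07/2014 14:23:18 UTC,
# using the default granularity table (assumed truncation rounding).
import calendar

GRANULARITIES = [  # (name, factor, size)
    ("second", 1,     3600),
    ("minute", 60,    1440),
    ("hour",   3600,  168),
    ("day",    86400, 365),
]

ts = calendar.timegm((2014, 7, 17, 14, 23, 18))  # 1405606998

keys = []
for name, factor, size in GRANULARITIES:
    outer = ts - ts % (factor * size)   # TS_ROUNDED_BY_FACTOR_AND_SIZE
    inner = ts - ts % factor            # TS_ROUNDED_BY_FACTOR
    keys.append("toppler:default:{0}:{1}:{2}".format(name, outer, inner))

print("\n".join(keys))
# toppler:default:second:1405605600:1405606998
# toppler:default:minute:1405555200:1405606980
# toppler:default:hour:1405555200:1405605600
# toppler:default:day:1387584000:1405555200
```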
When an event is emitted, the number of hits (often 1) is added to the target Sorted Set via the ZINCRBY command.
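A sorted set can be modeled as a member-to-score dict, which makes ZINCRBY a plain addition. This is a sketch of the mechanics only, not of StackExchange.Redis or the Toppler API; the key and member names are hypothetical.

```python
# Mimic Redis ZINCRBY against an in-memory store of sorted sets.

def zincrby(store, key, member, increment=1):
    """Add `increment` to `member`'s score in sorted set `key` (ZINCRBY)."""
    bucket = store.setdefault(key, {})
    bucket[member] = bucket.get(member, 0) + increment
    return bucket[member]

db = {}
zincrby(db, "toppler:default:hour:1405555200:1405605600", "item42")     # 1 hit
zincrby(db, "toppler:default:hour:1405555200:1405605600", "item42", 2)  # 2 more
print(db["toppler:default:hour:1405555200:1405605600"]["item42"])  # 3
```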
The retrieval of results uses the same key-composition logic, as the granularity and resolution are parameters of the Ranking method, but the ZUNIONSTORE command combines all matching sets into a single sorted set. This also makes it possible to store the result of the query (like a cache) or to apply a weight function.
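The union step can be sketched the same way: ZUNIONSTORE with the default SUM aggregation merges the per-period sets, optionally scaling each by a weight. The key names below are hypothetical.

```python
# Mimic Redis ZUNIONSTORE (SUM aggregation, optional per-key weights).

def zunionstore(store, dest, keys, weights=None):
    """Union the given sorted sets into `dest`, summing weighted scores."""
    weights = weights or [1] * len(keys)
    result = {}
    for key, w in zip(keys, weights):
        for member, score in store.get(key, {}).items():
            result[member] = result.get(member, 0) + score * w
    store[dest] = result
    return len(result)

db = {
    "hits:hour:t0": {"a": 5, "b": 2},
    "hits:hour:t1": {"a": 1, "c": 4},
}
zunionstore(db, "rank:last2h", ["hits:hour:t0", "hits:hour:t1"])
top = sorted(db["rank:last2h"].items(), key=lambda kv: -kv[1])
print(top)  # [('a', 6), ('c', 4), ('b', 2)]
```

Storing the union under its own key (`rank:last2h` here) is what lets the result double as a cache for repeated queries.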
Show Me the Code!
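Putting the two halves together, here is a compact pure-Python sketch of the flow: emit hits, then build a ranking over a set of period keys. The key and event names are hypothetical, and the dict-based store stands in for the ZINCRBY/ZUNIONSTORE calls that Toppler issues through StackExchange.Redis; it is not the actual Toppler API.

```python
# End-to-end sketch: emit hits, then rank them over several periods.
db = {}

def hit(event, period_key, hits=1):
    """Record `hits` for `event` in one period's sorted set (ZINCRBY)."""
    bucket = db.setdefault(period_key, {})
    bucket[event] = bucket.get(event, 0) + hits

def ranking(period_keys):
    """Merge the periods (ZUNIONSTORE) and return events by descending score."""
    merged = {}
    for key in period_keys:
        for event, score in db.get(key, {}).items():
            merged[event] = merged.get(event, 0) + score
    return sorted(merged.items(), key=lambda kv: -kv[1])

hit("game1", "hits:day:d0")
hit("game2", "hits:day:d0", 5)
hit("game1", "hits:day:d1", 2)
print(ranking(["hits:day:d0", "hits:day:d1"]))
# [('game2', 5), ('game1', 3)]
```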
It's just a very basic example, and many additional options are available to emit events (when, in which dimension, how many hits, …) or compute statistics (single/multi dimensions, caching, granularity & resolution, weight function, …).
The project is currently in beta, so please be indulgent and patient. Feel free to contact me, create issues, tell me what's wrong … Thanks.