March 12, 2019. Comments are off for this post.

Simply NUC offers Intel® NUC-based BlueFox solution

Simply NUC and BlueFox have teamed up to offer a range of versatile, Intel-based NUCs that can run BlueFox software, along with other applications, in a single solution.

Simply NUC deploys a Linux platform with native Docker container support. Any solution that can run in a Docker container, like BlueFox, can use the Simply NUC platform. This means vending-machine or POS software can be quickly deployed on the Simply NUC BlueFox solution, allowing end customers to optimize hardware deployment and maintenance.

More information is available on the Simply NUC website.

November 16, 2018

BlueFox and BrightSign join forces in the digital signage space

BlueFox has integrated its real-time mobile phone sensing technology into BrightSign's portfolio of digital-signage media players.

BlueFox-equipped BrightSign media players can detect nearby mobile phones without the need for any special apps, logins, or beacons. BlueFox for BrightSign increases ROI by helping optimize digital-signage messaging and measuring real-time customer engagement.

The BlueFox solution is ideal for digital-out-of-home advertising, retail, and other digital-signage applications where measuring and understanding customer foot-traffic patterns is important.

Read the AVNetwork article about the BrightSign announcement

Download the datasheet

October 30, 2018

“Donut” Detection Range

New to the BlueFox reporting suite: the ability to capture additional data outside of the standard inner detection range you've already configured. The outer detection range can be customized to limit the maximum reach of your BlueFox sensor, which by default can be up to 250 feet (~75 meters).

Here's an example: you operate a retail store in a shopping mall and want to understand how many people are inside, but also how many are passing by the entrance. At the full detection range, people might simply be too far away to notice the store at all, but people less than 50 feet away might be close enough to see the advertisements in your front windows and could be influenced to come inside. This data is now available through our API and with custom reporting.
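The inner/outer split above amounts to bucketing each detected device by its estimated distance. Here is a minimal sketch of that idea; the function and field names are illustrative assumptions, not the actual BlueFox API:

```python
# Hypothetical sketch: split detected devices into an inner zone and a
# surrounding "donut" ring by estimated distance. Thresholds and names
# are illustrative, not the real BlueFox API.

INNER_RANGE_FT = 50    # configured inner detection range
OUTER_RANGE_FT = 250   # maximum sensor reach (default)

def bucket_detections(detections):
    """detections: iterable of (device_id, distance_ft) tuples."""
    inside, passing_by = set(), set()
    for device_id, distance_ft in detections:
        if distance_ft <= INNER_RANGE_FT:
            inside.add(device_id)          # likely inside the store
        elif distance_ft <= OUTER_RANGE_FT:
            passing_by.add(device_id)      # in the "donut": walking past
        # beyond OUTER_RANGE_FT: out of reach, ignored
    return inside, passing_by

inside, passing_by = bucket_detections([
    ("a", 12.0), ("b", 48.5), ("c", 130.0), ("d", 300.0),
])
print(len(inside), len(passing_by))  # 2 1
```

Reporting then simply counts each bucket per time slot.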

March 7, 2018

The 4 Must-have Real-time Audience Analytics Advertisers Need

The industry’s leading advertising networks make use of insightful audience foot traffic analytics to give them a better understanding of consumers and ad efficiencies. As Yoann El Jaouhari, managing director of JCDecaux Cityscape observes, “We can see a trend whereby DOOH is also being used more creatively, pushing contextual dynamic contents to specific audiences, and also complementing social media campaigns, either through integration or through content creation.”

It's time for you to level the playing field. Our solutions put real-time audience data at your fingertips, empowering your network and your clients, whether you run out-of-home (OOH) media or digital signage. So, without further ado, here are the four must-have real-time audience analytics (stick around for a bonus!) that will empower brands and advertising networks to deliver effective, conversion-driving ads:

Audience Exposure
Measuring audience exposure is a common and imperative practice for assessing ad efficiency and giving advertisers and agencies an overview of foot traffic in ad spaces.

How do we do it? Simple. Our sensors evaluate traffic by counting smartphones. It has never been this easy to measure audience exposure. We track "real time" and "over time" Opportunity To See (OTS) counts (example: the current video wall OTS is 12, and the total since 8:30 AM is 144).
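The "real time" versus "over time" distinction can be pictured as a sliding window next to a running total. A minimal sketch, with an assumed window size and class name (not the actual BlueFox implementation):

```python
# Illustrative sketch: a real-time OTS count (sliding window) alongside a
# cumulative total. The 60-second window is an assumed parameter.
from collections import deque

class OTSCounter:
    def __init__(self, window_seconds=60):
        self.window = window_seconds
        self.sightings = deque()   # timestamps of recent detections
        self.total = 0             # cumulative OTS since start of day

    def record(self, ts):
        self.sightings.append(ts)
        self.total += 1

    def current(self, now):
        # Real-time OTS: only detections within the sliding window count.
        while self.sightings and now - self.sightings[0] > self.window:
            self.sightings.popleft()
        return len(self.sightings)

counter = OTSCounter(window_seconds=60)
for ts in [0, 10, 20, 90, 95]:
    counter.record(ts)
print(counter.current(now=100), counter.total)  # 2 5
```

At time 100, only the sightings at 90 and 95 are within the last 60 seconds, while all five contribute to the running total.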

Location Density
Location density data delves deeper into multi-location advertising campaigns. With location density analytics, advertisers and agencies can determine which campaign locations yield the best audience exposure. Examples of location density analytics include viewing hotspots and top traffic clusters, which let you evaluate and test the efficiency of your campaign.

This can enhance the advertising strategy and planning process. Knowing the distribution of audience exposure is great for increasing ad efficiency, but we can do better.

Dwell Time
In recent research, 55 percent of people who viewed digital signage could recall the specific message displayed every time they passed one. Audience dwell time shows how long customers stay engaged with the advertising media.

There are many reasons to use dwell times, mainly to refine and test media and to generate advertising insights. A/B testing multiple creatives can show you which asset has the highest retention rate. Also, tracking recurring traffic patterns and identifying "premium" advertising hours, days, or seasons can give you a great level of precision in your campaign management.

Evaluations of advertising efficiency rates are greatly improved by dwell time analytics.

Frequency and Cumulative Unique Reach
What's your playlist strategy? Commercial Integrator says an effective frequency is 4-7 times. Measuring impression frequency rates helps advertisers and brands understand the number of times an individual has been exposed to the ad media (example: of the 1,234 people who were around our ad over the past 72 hours, 321 of them had never seen it before).

Tracking cumulative impressions and unique reach reveals the number of new, unique audience members the ad is reaching. This insight is absolutely imperative in helping brands and agencies determine when their ad campaign has hit its saturation point. Also, a different message can be sent to a consumer on each successive exposure.
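Given a log of device sightings, frequency and unique reach are straightforward aggregations. A minimal sketch, with made-up sample data and field names:

```python
# Hypothetical sketch: frequency and cumulative unique reach computed from a
# log of (device_id, timestamp) sightings. Data and names are illustrative.
from collections import Counter

sightings = [
    ("a", 1), ("b", 2), ("a", 3), ("c", 4), ("a", 5), ("b", 6),
]

freq = Counter(device for device, _ in sightings)      # exposures per device
unique_reach = len(freq)                               # distinct devices seen
avg_frequency = sum(freq.values()) / unique_reach      # mean exposures each

first_timers = [d for d, n in freq.items() if n == 1]  # seen exactly once

print(unique_reach, avg_frequency, first_timers)  # 3 2.0 ['c']
```

Watching `unique_reach` flatten over time while total impressions keep climbing is one practical signal that a campaign is approaching its saturation point.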

Click to download our Digital Signage Handbook

Bonus! The Audience Analytics Sample Report 

Thanks for sticking around; here is the bonus as promised! As part of our suite of solutions, we generate a report of the different audience foot traffic analytics to help advertising networks communicate campaign results to their clients and brands. On top of the four metrics mentioned, the report includes many more insightful and actionable analytics, for example, weekly and monthly impression breakdowns.

While our clients use their real-time audience analytics to measure and optimize their ad efficiency, they are also successful in engaging with their audience through our proximity-based customer engagement solution. Stay tuned for our next article on how our clients have managed to establish and maintain strong customer relationships through proximity-based messaging!



September 7, 2017

InfluxDB: The Good, the Bad, and the Ugly

by Thomas Sandholm, Architect @BlueFox.IO

We discuss lessons learned from scaling our analytics backend using state-of-the-art time-series database technology. InfluxDB has a lot to offer, if used the right way. We take you through some of our observed sweet spots and pitfalls.

We were having some scalability challenges with our existing analytics backend, which comprised a wild combination of Cassandra, Elasticsearch, MySQL, and Redis.

There were issues with disks filling up and databases crashing, and even some of the most powerful AWS instance flavors were showing performance problems. To add insult to injury, we also needed to scale fast to meet customer demand, without increasing the already astronomical AWS bill.

Cassandra and Elasticsearch are great tools, but for our particular use case they weren’t exactly right for the job. At the very least, they weren’t able to provide the full solution. A lot of time was spent sending data back and forth between the database and our application, so that we could do our custom analytics and then write data back to serve queries.

After a reevaluation of our core features, it became clear that a simple time-series database would get us almost all the functionality we needed while keeping most of the processing within the database server. Enter InfluxDB.

The Good

We are generally very happy with InfluxDB; it has run in production for six months without any issues. The main benefit is resource efficiency: we can achieve a lot with a very small resource footprint.

It comes as no surprise that time-slot aggregated data, i.e. sums of metrics in hourly and daily buckets, is where InfluxDB shines. The ability to aggregate time series efficiently with a simple query was well worth the migration on its own.
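In InfluxQL, that kind of time-slot aggregation is a single `GROUP BY time()` query. A sketch, where the measurement, field, and tag names are illustrative rather than BlueFox's actual schema:

```sql
-- Hourly visit counts per sensor over the last week.
-- "sightings", "device_id", and "sensor_id" are illustrative names.
SELECT COUNT("device_id") AS visits
FROM "sightings"
WHERE time >= now() - 7d
GROUP BY time(1h), "sensor_id"
```

InfluxDB buckets the points into hourly windows server-side, so the application only ever sees the aggregated rows.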

At a close second comes InfluxDB's retention policies. As your product matures and the infrastructure scales up with demand, it's great to have an easy knob to adjust data retention up or down and avoid catastrophic disk-full crashes. In essence, you create a retention policy (e.g. one for customer-visit-frequency data), then set how long you want to keep data tagged with that policy. Sounds simple, and it is. Compared with the alternative of fiddling with TTL configurations in application code, the usefulness of this feature cannot be overstated.
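Creating such a policy is one statement. A sketch with assumed database, policy, and duration values:

```sql
-- Keep raw sightings for 30 days; keep rollups for two years (104 weeks).
-- Database and policy names ("bluefox", "raw_30d", "rollup_2y") are assumed.
CREATE RETENTION POLICY "raw_30d" ON "bluefox" DURATION 30d REPLICATION 1 DEFAULT
CREATE RETENTION POLICY "rollup_2y" ON "bluefox" DURATION 104w REPLICATION 1
```

Once a policy's duration elapses, InfluxDB drops the expired data automatically, with no application-side TTL logic.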

Another life-saver for us was the INTO clause. You can run a query and write the results directly back into the database without round-tripping to the application client. As mentioned above, this used to be a common pattern for us, so this feature alone improved our processing pipeline time by an order of magnitude.
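Combining `INTO` with the aggregation query turns the rollup into a pure server-side operation. A sketch, again with illustrative names (and assuming a rollup retention policy already exists):

```sql
-- Aggregate raw points into a rollup measurement without a client round trip.
-- "bluefox", "rollup_2y", "hourly_visits", and "sightings" are assumed names.
SELECT COUNT("device_id") AS visits
INTO "bluefox"."rollup_2y"."hourly_visits"
FROM "sightings"
WHERE time >= now() - 1d
GROUP BY time(1h), "sensor_id"
```

Before `INTO`, the equivalent flow was read raw points to the client, aggregate in application code, write the results back: three network hops instead of zero.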

The final observed benefit of InfluxDB is what sits under the hood. All the data are sharded to allow for parallelism and scale-out, using the same core storage technology applied in most popular NoSQL databases today (Cassandra, LevelDB, MongoDB, RocksDB): log-structured merge trees (LSM trees). In InfluxDB, they are aptly called time-structured merge trees. This means that similar scalability designs work well, write speeds are fast, and access to data in the same shard is efficient. Combine this with a schema-less design of your data and you have a winning configuration. You add tables (called measurements) and columns as you go; you only need to create the database, which doesn't require any schema.

So performance is generally great, but there are definitely lots of opportunities to mess things up along the way, which leads us to…

The Bad

If your deployment is not configured properly, performance will at some point start to suffer. As I mentioned, InfluxDB shards all the data, akin to other LSM-tree databases, but by default a new shard is created each week and data are kept forever (infinite retention). Depending on your ingest load, write times will eventually suffer with such a configuration; but being too aggressive in splitting the data into shards, which is done through time durations, could render queries astonishingly slow. So it's a tradeoff: the sweet spot is where shards are small enough to make writes fast while still serving most queries.

Another concern is that shard configurations have an interesting dependency on retention periods. When data are discarded because of the retention date expiring, the entire shard is lost. Therefore, if shards are too large (let’s say one month), and you have a retention period of one month, it will mean that two months of data have to be kept on the disk.
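Both knobs, retention and shard size, live on the retention policy and can be tuned independently. A sketch with assumed names and durations:

```sql
-- Keep data for 30 days, but cut shards daily so expiry discards at most
-- one extra day of data. "raw_30d" and "bluefox" are assumed names.
ALTER RETENTION POLICY "raw_30d" ON "bluefox" DURATION 30d SHARD DURATION 1d
```

With one-day shards and a 30-day retention, at most ~31 days of data sit on disk, rather than the two months you would pay for with month-long shards.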

Another drawback is that the open source version does not (in contrast to tools like Cassandra and Elasticsearch) come with support for distributed deployments (clustering), despite the fact that the underlying database was designed for it. You need to upgrade to the paid InfluxEnterprise or InfluxCloud versions to distribute your database across nodes.

The Ugly

This is a bit of a selfish item, because our use case depends heavily on it: the lack of support for the histogram function. This great feature was available in v0.8 but has been dropped ever since, including in the recently released v1.3. It would have saved us lots of headaches, but since we also wanted all the performance improvements of versions 0.9+, downgrading was not an option. We dabbled around a bit with percentiles, but they generated graphs that were hard for our users to understand, and a mathematical conversion is not feasible unless you have very smooth distributions and many data points, which in turn kills performance. We ended up building a somewhat restricted custom solution, but we would still move back to the histogram feature in a heartbeat if it were ever reintroduced.

The other gripe we have is that we'd like to use the Cloud offering, but its backup retention policies are too limited. We don't want to keep all the data in the live InfluxDB database, as doing so impairs performance (see the discussion of shards and retention policies above) and forces us to buy the more expensive live clusters with increased disk space. Instead, we want to keep backups of archived data in something like S3, which allows us to do one-off analysis of old data for R&D purposes. Again, we ended up implementing our own solution that does exactly that.

So What's the Verdict?

One of the main lessons learned when scaling our solution was to introduce Nagling between our application and InfluxDB. Buffering measurement points on a per-sensor-stream basis, then writing them in a single batch when the buffer fills up, allowed us to improve write throughput by up to 10x. Because our application containers (in AWS ECS) are stateless, we had to implement buffering with an external persistence service. Originally we used Memcached (appends), but we then switched to Redis (lists), as it was more reliable and had no impact on performance.
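The batching pattern itself is simple. Here is a minimal in-process sketch of the idea; the production version described above keeps the buffers in Redis lists rather than local memory, and the class and parameter names here are illustrative:

```python
# Sketch of Nagle-style write batching: accumulate points per sensor stream
# and flush one batched write when the buffer fills. Names are illustrative;
# the article's production version persists buffers in Redis, not in-process.

class BatchingWriter:
    def __init__(self, flush_fn, batch_size=500):
        self.flush_fn = flush_fn      # called with a list of points
        self.batch_size = batch_size
        self.buffers = {}             # sensor_id -> pending points

    def write(self, sensor_id, point):
        buf = self.buffers.setdefault(sensor_id, [])
        buf.append(point)
        if len(buf) >= self.batch_size:
            self.flush(sensor_id)     # one batched write, many points

    def flush(self, sensor_id):
        buf = self.buffers.pop(sensor_id, [])
        if buf:
            self.flush_fn(buf)

batches = []
writer = BatchingWriter(batches.append, batch_size=3)
for i in range(7):
    writer.write("sensor-1", {"seq": i})
writer.flush("sensor-1")              # drain the remainder on shutdown
print([len(b) for b in batches])  # [3, 3, 1]
```

Seven single-point writes collapse into three batched ones; at production batch sizes the reduction in write calls is what drives the throughput gain.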

Another major performance breakthrough came when we started caching InfluxDB query results in our front-end database cache for semi-structured retrieval of sensor summary statistics. This architecture allowed our customer-facing application UIs to be performance-isolated from the data ingest and analytics processing services.

In summary, despite the tongue-in-cheek buildup towards the “ugly,” we are actually very happy with InfluxDB, for its design, features, performance, and reliability. That said, one can always wish for more!

This article was originally posted on Medium.
