Granular Data Usage Math in Customer Portal


I’m using the API to get aggregate data usage for the customer portal.

The script returns the data in timestamp chunks; I parse them in a foreach loop into per-day arrays and add them all together (I don’t need data that granular, but this is all I could find to accomplish this specific task).

So my end result is an array that looks like this for each IP address in the account:
->IP Address
->->Date(By Day)
->->->IN total for that day (in bits)
->->->OUT total for that day (in bits)
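The per-day aggregation described above can be sketched roughly like this (the `{timestamp => bits}` shape and the `in`/`out` keys are assumptions based on the dump later in the thread, and the helper name is made up):

```python
from collections import defaultdict
from datetime import datetime, timezone

def daily_totals(chunks):
    """Collapse {unix_timestamp: bits} chunks into per-day IN/OUT totals.

    `chunks` is assumed to look like one IP's entry in the API return:
    {"in": {ts: bits, ...}, "out": {ts: bits, ...}}
    """
    totals = defaultdict(lambda: {"in": 0, "out": 0})
    for direction in ("in", "out"):
        for ts, bits in chunks.get(direction, {}).items():
            # Bucket each timestamp into its UTC calendar day
            day = datetime.fromtimestamp(int(ts), tz=timezone.utc).strftime("%Y-%m-%d")
            totals[day][direction] += bits
    return dict(totals)

# Two IN chunks and one OUT chunk, all on 2017-10-22 UTC
chunks = {"in": {"1508630400": 100, "1508634000": 200},
          "out": {"1508630400": 50}}
result = daily_totals(chunks)
```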

So my question:
According to the API docs, I’m receiving the data in bits, and per IP I get IN and OUT data only (not per-application data as shown in the docs, which is actually better for me anyway).

I will use 2017-10-22 as an example:
->-> Date: 2017-10-22
->->-> IN:400641693
->->-> OUT:26824094

IN = 400641693 bits / 1000000000 (to get gigabits) = 0.400641693 Gbits
OUT = 26824094 bits / 1000000000 = 0.026824094 Gbits
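For reference, the same conversion in code, plus what the figure would work out to if the API were actually returning bytes (the hypothesis raised further down in the thread):

```python
# Raw IN value for 2017-10-22, as returned by the API
IN_RAW = 400641693

gbits_if_bits = IN_RAW / 1e9       # 0.400641693 Gbit if the value is bits
gbits_if_bytes = IN_RAW * 8 / 1e9  # ~3.205 Gbit if the value is actually bytes
```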

According to my math there was incredibly little usage on this account, but my router’s bandwidth tracker shows much higher data usage for that day on that specific IP. I’ve tried many other days and other IPs on the account, all with the same low results.

So where am I going wrong? Is the API returning different units than expected (i.e. megabits or megabytes), or is it not showing all of the account’s NetFlow data for some other reason?

Thanks in advance for any help you can offer.


The only way the math seems to work is if the number being returned is in bytes (not bits), or if for some reason only some of the usage data is being returned.

In the docs example the data type is shown (e.g. “Call of Duty”), whereas our MikroTik NetFlow tracks it all as “Unknown”, so for each IP there is only one IN and one OUT:
[data_source_identifier] =>
[data_source_parent] => NetFlow (Core Mikrotik [1])
[data_type] => Unknown
[in] => stdClass Object(…all the timestamps => datausage…)
[out] => stdClass Object(…all the timestamps => datausage…)


I also did the math assuming the data is in kilobits rather than bits; the bits-based result seemed too small, and this looks a little better:
IN = 400641693 / 10000000 (two fewer zeros, to get gigabits) = 40.0641693 Gbits, which seems more realistic

I then totaled the data across all IPs rather than breaking it down per IP, and the end result looks the most realistic (but I think my math is still incorrect):


I have tried the same thing: I added up the total bits and converted to GB (bytes), but the result was way off. Currently I’m using the ‘totals’ in bytes at the bottom of the return array, but even this doesn’t appear accurate.
Have you found a better way?


Nope, I’m still using my fudged numbers, but no one seems to really mind (as far as I can tell).
You’re the first person to confirm my hunch that something is wrong.

Can you confirm that it’s you and not us, @simon, or let us know what we’re doing wrong?


Use the same timestamp for the start and end; the backend converts it to the start and end of that day. This was changed internally and never updated in the API docs.
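A sketch of what that looks like across a whole billing period, one request’s parameters per day (the `start`/`end` parameter names here are assumptions for illustration, not the documented API):

```python
from datetime import date, timedelta

def day_params(start: date, end: date):
    """Yield one parameter set per day: the same date for 'start' and
    'end', which the backend expands to that day's full 00:00-23:59 range.
    (Parameter names are hypothetical.)"""
    d = start
    while d <= end:
        yield {"start": d.isoformat(), "end": d.isoformat()}
        d += timedelta(days=1)

# One request per day of October 2017 -> 31 API calls for the month
reqs = list(day_params(date(2017, 10, 1), date(2017, 10, 31)))
```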


30+ API calls for the month seems a little excessive.
wishlist -> add a single call that just gives you the daily totals for the time duration. :wink:

That would be great.


Toss it in Receptive; we are building the new API now.

For today, I’d just grab each day the first time and cache it if I were adding this to the portal; you really only need to fetch each day once. Laravel has a built-in caching mechanism, so it’d be pretty easy. You can even use Guzzle to pool requests and make 10+ at once.
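The reply above names Laravel’s cache and Guzzle pooling specifically; the caching idea itself is language-agnostic and can be sketched like this (the `fetch` callable standing in for the real API call is hypothetical):

```python
# Past days never change, so each one only needs to be fetched once
# and can then be served from the cache indefinitely.
_cache: dict[str, dict] = {}

def usage_for_day(day: str, fetch) -> dict:
    """Return cached usage for `day`, calling `fetch(day)` only on a miss."""
    if day not in _cache:
        _cache[day] = fetch(day)
    return _cache[day]

# Demonstrate that the underlying fetch runs only once per day
calls = []
def fake_fetch(day):
    calls.append(day)
    return {"in": 1, "out": 2}

first = usage_for_day("2017-10-22", fake_fetch)
second = usage_for_day("2017-10-22", fake_fetch)
```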


Using the same timestamp for start/end definitely worked; however, the totals aggregated for the billing period still do not match the total returned by an aggregate call to the API, but at this point it’s close enough.
I too would like to see a daily usage feature in the API, or at least a trimmed-down version of the granular query. Maybe an argument could be sent via the query for a ‘long’ or ‘short’ return, with ‘short’ sending back only total_bytes_in/out and free_in/out, and ‘long’ returning the massive granular bit-data array as well.