MikroTik PCQ/Mangle generator


#1

https://sonar.software/generate/mikrotik-mangle-queue

We’re starting to add a variety of tools to the site to help people build configs to use with Sonar, and this is the first one available. You plug in your Sonar address list names and the speeds you want to rate limit customers to, and it generates a configuration you can paste into your MikroTik terminal to implement the rate limiting.
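Roughly, the generated config pairs address-list-based packet marks with PCQ queue types and a queue tree along these lines (the “Basic” list name, the rates, and the queue names below are placeholder examples, not the tool’s exact output):

/queue type
add name=Basic_pcq_down kind=pcq pcq-classifier=dst-address pcq-rate=10M
add name=Basic_pcq_up kind=pcq pcq-classifier=src-address pcq-rate=2M
/ip firewall mangle
#Download
add action=mark-packet chain=postrouting dst-address-list=Basic new-packet-mark=Basic_traffic_down passthrough=yes
#Upload
add action=mark-packet chain=prerouting src-address-list=Basic new-packet-mark=Basic_traffic_up passthrough=yes
/queue tree
add name=Basic_down parent=global packet-mark=Basic_traffic_down queue=Basic_pcq_down
add name=Basic_up parent=global packet-mark=Basic_traffic_up queue=Basic_pcq_up

With pcq-classifier=dst-address on the download side and src-address on the upload side, each customer address in the list gets its own sub-queue limited to pcq-rate.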

More coming soon!


#2

Just a question on packet mark vs connection mark.

I have mine set up as follows.

/ip firewall mangle
#Download
add chain=prerouting action=mark-connection new-connection-mark=home_basic_connection_down passthrough=yes dst-address-list=Basic
add chain=forward action=mark-packet new-packet-mark=home_basic_traffic_down passthrough=no connection-mark=home_basic_connection_down
#Upload
add chain=prerouting action=mark-connection new-connection-mark=home_basic_connection_up passthrough=yes src-address-list=Basic
add chain=forward action=mark-packet new-packet-mark=home_basic_traffic_up passthrough=no connection-mark=home_basic_connection_up

Your queue generator generates the following

/ip firewall mangle
#Download
add action=mark-packet chain=postrouting dst-address-list=Basic new-packet-mark=Basic_traffic_down passthrough=yes
#Upload
add action=mark-packet chain=prerouting src-address-list=Basic new-packet-mark=Basic_traffic_up passthrough=yes

Isn’t it far more resource-intensive to mark every packet rather than marking the connection? That’s what I have read, but maybe you could shed some light on the subject. I also noticed I set mine up with prerouting on the download chain while you use postrouting; is this something I am doing wrong? It seems to work well.
Thanks for all you do!


#3

Connection marking doesn’t seem to work for this in the latest MikroTik firmware: it marks the connection in both directions, so you can’t apply separate rates in each direction. Connection marking is better, but I haven’t seen packet marks be a huge issue on small/medium networks.

The generator uses postrouting on the download so that it still works if the router itself is doing NAT: by postrouting the destination has already been translated back to the customer’s private address, whereas a prerouting rule would match on the public IP rather than the private one.
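As a rough illustration (the addresses are made up): say a customer at 10.1.2.3 is src-natted by the router to 203.0.113.5, and 10.1.2.3 is what’s in the Basic address list. For a download packet heading to that customer:

#mangle prerouting:  dst=203.0.113.5 (NAT not reversed yet, so dst-address-list=Basic would miss)
#mangle postrouting: dst=10.1.2.3 (translated back to the private address, so dst-address-list=Basic matches)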


#4

Hate to raise a dead thread, but we’ve hit a brick wall on our CCR1036 using the address list, mangle, and queue tree method. Maximum single-client download speed can’t go above ~300-400 Mbps TCP / ~600-700 Mbps UDP. Disabling all the mangle rules allows full wire-line speeds (2 Gbps fiber uplink). Even packets/connections that are marked by rules at the very beginning of the mangle chain still take a severe hit.


#5

Are you using the rules from the generator that bind the queues to a specific interface? I would guess that to go that high per client you’ll also need to increase the buffer sizes a lot, but it will take some testing to find the right values.
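For a high per-client rate like that, the buffers I’d be tuning first are on the PCQ queue type. A rough sketch with placeholder names and sizes (the values aren’t recommendations, just somewhere to start testing, and the KiB units assume a fairly recent RouterOS 6.x):

/queue type
add name=plan400_pcq_down kind=pcq pcq-classifier=dst-address pcq-rate=400M pcq-limit=200KiB pcq-total-limit=20000KiB
/queue tree
add name=plan400_down parent=ether2-customers packet-mark=plan400_traffic_down queue=plan400_pcq_down

pcq-limit is the per-customer buffer and pcq-total-limit caps memory for the whole queue.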


#6

By “buffer” are you referring to the “bucket size” under Queues / Queue Tree? I’ll be following this in the hope that @Michael_Crapse finds a solution. We run a “full throttle” package for our residential customers at 400 Mbps, and we’re having a hard time hitting that with the normal queue setup. I suspect it’s the same issue.


#7

It’s less an issue of buffer size or bucket size; simply being queued doesn’t cause the problem, but the mangle rules destroy single-thread performance. As we add more packages and more subscribers, the address lists grow and the mangle rules grow with them. We’re mangling the way it’s shown here: first mark a connection based on the address list, then mark the packets based on the connection mark. Tools -> Profile shows up to 4 cores slammed to 100% during a normal speedtest.net run, which hovers around 400 Mbps even on a 1-gig customer’s queue (where we’ve increased buffer sizes as much as possible). Turning off a few mangle rules improves performance dramatically, and removing them all keeps our overall CPU usage around 8%, roughly in line with the share of traffic we’re pushing compared to the 10G port we hope to turn up in the next month or two.
@Chad_Wachs, try making your address lists smaller or removing some other mangle rules (leaving your 400 Mbps customer mangle rules alone) and see if that helps you reach the speeds; if so, we’re having the same issue.


#8

Are the queues bound to a specific interface, or are they global?


#9

We have them on separate interfaces instead of global, and we’re still seeing the aforementioned issue. What we’ve started to do instead is keep a single address list for all of our customers above 100 Mbps and use a firewall rule to fasttrack that list, mangling only the slower customers. We also stopped mangling VoIP separately and combined as many plans as we could (upgrading customers to slightly faster plans where possible). Reducing the number of mangle rules (different bandwidth plans) has helped bring our average CPU load for 1 Gbps of traffic down to 12% (from 100%). We will optimize further using jump rules and overarching address lists (business vs. residential), giving a sort of binary-tree-like search-and-mangle process, as sketched below. Our goal is to push 20 Gbps of traffic through a CCR1036 (which would mean we could push 40 Gbps through a CCR1072). We’re trying to build this to scale; would you suggest we’d be able to do that with our current method?
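In case it helps anyone else, here’s roughly the shape of it; the list, chain, and mark names below are made up, and it only shows the download direction (upload mirrors it with src-address-list):

/ip firewall filter
#fasttrack the >100 Mbps customers so their traffic skips the mangle/queue path
add chain=forward action=fasttrack-connection connection-state=established,related dst-address-list=fast_customers
add chain=forward action=fasttrack-connection connection-state=established,related src-address-list=fast_customers
/ip firewall mangle
#one jump on the overarching list, then each packet is only compared against that group's plans
add chain=forward action=jump jump-target=res_down dst-address-list=residential
add chain=res_down action=mark-packet new-packet-mark=res_basic_down passthrough=no dst-address-list=res_basic
add chain=res_down action=mark-packet new-packet-mark=res_value_down passthrough=no dst-address-list=res_value

The trade-off with fasttrack is that those connections bypass mangle (and the global queue trees), so the fast customers effectively aren’t shaped at all.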


#10

If you’re legitimately looking at doing 10+ Gbps of traffic, I would personally get a dedicated shaper rather than using a MikroTik. I haven’t tried doing it at that kind of scale on MikroTik, but if you’re already struggling, I’d imagine it’s only going to get harder.


#11

Thanks for the feedback, we’ll explore the other options.