
General • Re: Nstreme or Nv2

I'm posting to try to pin down optimal Nv2 performance. I've run a network on Nv2 for 7 years and made various attempts to tune it.

In recent months, I've been making various network upgrades, and one enterprise client who is monitoring our circuit very closely is convinced there is still an unacceptable amount of packet loss.

So, to hopefully benefit all, I'd like to post my results, and see if anyone else sees similar improvements.

Here is the basis for the testing done this morning. Throughput was OK, but I was seeing a certain amount of packet loss to the client, even with very little traffic going to the clients. When I started, the queue type on the AP was multi-queue-ethernet-default on both the WLAN and ETH interfaces. There seem to be very few controls for Nv2: the queue types, the TDMA period, and the cell distance appear to be all we have for tuning.

Here are the settings that seem to be working well this morning (a CLI sketch of them follows the list):

AP settings, all else default (QRT5AC):
- Band: AC only, 80 MHz channel width, superchannel
- Multicast helper: full
- Multicast buffering: checked

Advanced:
- HW retries: 2
- Max station count: 15 (most other APs are at 30; not sure this has much impact)
- Data rates: default
- Guard interval: any

Nv2:
- TDMA period: 2 ms
- Cell radius: 20 km
- Security: yes
- Mode: fixed @ 60% downlink
- Queue count: 2, QoS default
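For reference, here's roughly how those settings look from the RouterOS terminal. This is a hedged sketch, not my exact export: "wlan1", the channel-width value, and the preshared key are placeholders you'd adjust for your own hardware.

Code:
# approximate CLI form of the AP settings listed above
/interface wireless
set wlan1 band=5ghz-onlyac channel-width=20/40/80mhz-Ceee \
    frequency-mode=superchannel multicast-helper=full multicast-buffering=yes \
    hw-retries=2 max-station-count=15 guard-interval=any \
    wireless-protocol=nv2 tdma-period-size=2 nv2-cell-radius=20 \
    nv2-security=enabled nv2-preshared-key="<your-key>" \
    nv2-mode=fixed-downlink nv2-downlink-ratio=60 \
    nv2-queue-count=2 nv2-qos=default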

Status: 6 clients connected.

The Ethernet interface queue is at multi-queue-ethernet-default.
The wireless interface queue is at wireless-default, with the SFQ modified to perturb 9 and allot 4000 bytes (I tried 2000000 and 11, but that was lossy; I figured 4000 would roughly match the Nv2 frame size).
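In CLI terms, the queue change amounts to something like the sketch below. Assumptions on my part: the wireless interface is "wlan1", and you edit the built-in wireless-default queue type in place rather than cloning it.

Code:
# tune the SFQ parameters of the built-in wireless queue type
/queue type set wireless-default sfq-perturb=9 sfq-allot=4000
# make sure the wireless interface is actually using that queue type
/queue interface set wlan1 queue=wireless-default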

Ran a send test at 25 Mbit to one client (DynaDish AC) from another core device in the network.
Ran a simultaneous send test at 20 Mbit to another client (QRT AC) on the same AP from another core device in the network.
Then, on the AP, ran a ping test with the default packet size at a 35 ms interval to the client who was experiencing packet loss.
No longer seeing any drops after modifying the SFQ to 9/4000.
Ran the tests up to 85 Mbit and 20 Mbit respectively from the two btest TCP generators (CCR1009s somewhere on my core PtP network). Still no loss.
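If you want to reproduce the test setup, it was roughly the following. A sketch only: the addresses are placeholders, and I'm reading the 35 ms figure as the ping interval.

Code:
# from each CCR (btest TCP generator) toward a client radio
/tool bandwidth-test address=<client-1> protocol=tcp direction=transmit local-tx-speed=25M
/tool bandwidth-test address=<client-2> protocol=tcp direction=transmit local-tx-speed=20M
# meanwhile, on the AP, watch for drops with a paced ping
/ping <client-1> interval=35ms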

AP is on 6.48.3, as is the client.
Ran the test up to 40 Mbit TCP from one CCR
and 80 Mbit TCP from the other:
120 Mbit aggregate to the test clients' radios. Result: no loss or jitter from AP to client at the 35 ms ping interval.

So I ran another test, this time reducing the TX to 65 Mbit and increasing the RX, with both tests pointed at that client, .33.252 (the one complaining about packet loss).
Still got 120x5 and no packet loss.

Surprisingly, this config seems to work. I've been thinking for years that the trick is to match up the TCP needs with the Nv2 needs.
The two clients are at 3 km and 2 km with -50 signal.

I've not changed the queue type on the far side from multi-queue-ethernet-default for both interfaces on the client, so I'm really surprised at the results. It seems the more traffic on the interface, the less loss there is. I ran the test for about 7000 packets and didn't drop a single one to the client receiving 80 Mbit, while the other client received 40 Mbit. It was also 9 AM, and I did nothing to hamper other clients' traffic on the cell. I'm really happy with the test results on this AP.

So I'm going to try this config on a crowded interface where I have 20-25 clients, more like 15-18 km out.

Here's the report: 24 clients, ranging from -50 to -65 signal. 80 MHz, 2 retries, same settings as above, except cell radius set to 40 km (we have some clients at 37 km; the AP is a NetMetal).
I had to increase the ping interval to 50 ms to get the same low loss. Four clients have signal below -65, fluctuating between -66 and -71, and two of those are really poor. Normally we try to get all clients into the -50 range, but the cutoff for the installer is -65.
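The only two knobs I changed versus the smaller cell, again in sketch form (assuming "wlan1" and a placeholder client address):

Code:
/interface wireless set wlan1 nv2-cell-radius=40
/ping <client> interval=50ms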
I did make sure the clients are on later versions of 6.4x, and there were a couple we put on 7 for testing.
Overall, I was able to get similar results to the smaller cell with fewer customers. I do feel there was an improvement from the queue type change and the lower 2 ms TDMA window; I'd had it set at 3 ms for heavy subscriber cells. I've let our guys know to go re-align the few clients with poor signal, but the other subscribers seemed to be performing well after the changes detailed above.


I'd love to hear if others get the same results, where a loaded cell has very little loss with this config, or if you have improvements to suggest.
Cheers,

Andy
Hi Andy, your post was very helpful in my journey to improve my Nv2 cells! How do you do L2/L3/AAA to the clients? Do you use PPPoE, DHCP, VLAN per client, RADIUS? I'm asking because I'm trying to experiment with wireless QoS, but our network is mainly PPPoE and the encapsulation prevents it from working, so I'm looking into alternatives.

Statistics: Posted by fcerezochalten — Mon Jan 08, 2024 8:29 pm


