11 Best Practices to Ensure Excellent Quality of Service

For managed services providers (MSPs) to maintain strong customer relationships, they need to consistently deliver reliable services. Much of that delivery depends on how effectively network bandwidth is used.

Quality of Service (QoS) is the practice of managing a network’s bandwidth and performance. The technologies involved give administrators the ability to prioritize certain traffic over other traffic as it passes through an enterprise network. A significant amount of planning and coordination goes into establishing effective QoS, which is why it’s important to have a comprehensive understanding of how much QoS matters to customers.

If your network is facing latency and bandwidth issues, the best practices listed in this guide will help you use QoS technologies to achieve better performance.

What is QoS in networking?

QoS is a set of technologies and features used to manage bandwidth usage as data passes across computer networks. It is most often used to protect high-priority, real-time applications. Each QoS technology has a specific role, and they are used in conjunction with one another to build end-to-end network QoS policies.

The two most common QoS methods for managing traffic are queuing and classification. Queuing creates buffers in devices that hold data waiting to be processed. Queues allow for bandwidth reservation and traffic prioritization as traffic enters or leaves a network device. When queues are not emptied quickly enough, they overflow and drop traffic. Classification, on the other hand, identifies and marks traffic so network devices know how to prioritize data as it passes across a network.
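
To make this concrete, here is a minimal Python sketch of classification feeding strict-priority queues. The class names, queue depths, and port-matching rules are hypothetical, chosen only for illustration: packets are classified by simple match rules, placed into a fixed-depth per-class queue (tail-dropping on overflow), and the scheduler always services the highest-priority non-empty queue first.

```python
from collections import deque

# Hypothetical classes, highest priority first; the value is each queue's buffer depth.
CLASSES = {"voice": 16, "business": 64, "best_effort": 256}
queues = {name: deque() for name in CLASSES}

def classify(packet):
    """Very simple classifier: match on destination port (illustrative rules only)."""
    if packet["dst_port"] in (5060, 16384):      # e.g., SIP / RTP-style voice traffic
        return "voice"
    if packet["dst_port"] in (443, 1433):        # e.g., business web / database traffic
        return "business"
    return "best_effort"

def enqueue(packet):
    """Queue the packet in its class, tail-dropping if the buffer is full."""
    cls = classify(packet)
    if len(queues[cls]) >= CLASSES[cls]:
        return False                              # queue overflow: packet is dropped
    queues[cls].append(packet)
    return True

def dequeue():
    """Strict-priority scheduler: always drain the highest-priority queue first."""
    for cls in CLASSES:                           # dict preserves insertion order
        if queues[cls]:
            return queues[cls].popleft()
    return None

# Example: voice is forwarded ahead of best-effort traffic that arrived earlier.
enqueue({"dst_port": 80,   "payload": "web"})
enqueue({"dst_port": 5060, "payload": "call setup"})
print(dequeue()["payload"])                       # -> "call setup"
```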

Two other QoS techniques in widespread use are policing and shaping. Policing and shaping tools limit bandwidth utilization for traffic types defined at an administrative level. Shaping sets a software-defined limit on the transmission rate for a certain class of data; when more traffic needs to be sent than the shaped limit allows, the excess traffic is buffered. Policing enforces a hard limit on bandwidth, which means that if applications attempt to use more bandwidth than they are allocated, the excess traffic is dropped or re-marked.
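
The following Python sketch implements a simple token-bucket limiter to illustrate the distinction; the rate and burst values are arbitrary examples. Used as a policer, packets that exceed the contract are dropped (or would be re-marked); a shaper would use the same bucket but buffer non-conforming packets and delay transmission until enough tokens accumulate.

```python
import time

class TokenBucket:
    """Simple token bucket: 'rate' bytes/second refill, 'burst' bytes of capacity."""
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens = burst
        self.last = time.monotonic()

    def _refill(self):
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now

    def conforms(self, size):
        """Policing decision: True if the packet fits the contract, False to drop or re-mark."""
        self._refill()
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False

# Police a hypothetical 1 Mbps contract with a 15 KB burst allowance.
policer = TokenBucket(rate=125_000, burst=15_000)   # 1 Mbps = 125,000 bytes/s
for size in (1500, 1500, 64_000):
    action = "forward" if policer.conforms(size) else "drop or re-mark"
    print(f"{size}-byte packet: {action}")
```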

Weighted Random Early Detection (WRED) is a queuing discipline that provides a congestion avoidance mechanism: as queues begin to fill, it randomly drops lower-priority TCP packets. WRED does this to prevent congestion from negatively impacting higher-priority data.
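
The sketch below shows the core WRED calculation (the thresholds and probabilities are illustrative, not vendor defaults): the average queue depth is compared against per-class minimum and maximum thresholds, and the drop probability ramps up linearly between them, with lower-priority classes given more aggressive thresholds so they are dropped earlier.

```python
import random

# Illustrative per-class WRED profiles: lower-priority traffic is dropped earlier.
PROFILES = {
    "high": {"min_th": 40, "max_th": 60, "max_p": 0.05},
    "low":  {"min_th": 20, "max_th": 40, "max_p": 0.20},
}

def drop_probability(avg_depth, profile):
    """Linear ramp from 0 at min_th to max_p at max_th; drop everything above max_th."""
    if avg_depth < profile["min_th"]:
        return 0.0
    if avg_depth >= profile["max_th"]:
        return 1.0
    span = profile["max_th"] - profile["min_th"]
    return profile["max_p"] * (avg_depth - profile["min_th"]) / span

def should_drop(avg_depth, traffic_class):
    return random.random() < drop_probability(avg_depth, PROFILES[traffic_class])

# At the same average queue depth, low-priority packets are far more likely to be dropped.
print(drop_probability(35, PROFILES["high"]))   # 0.0  (below its minimum threshold)
print(drop_probability(35, PROFILES["low"]))    # 0.15 (ramping toward max_p)
```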

Lastly, there are link-specific fragmentation and compression technologies. These are used on lower bandwidth WANs, ensuring real-time applications are not impacted by high jitter and delay.

What type of network traffic requires QoS?

QoS is important for all traffic types, but it’s especially important for the following:

  • Email
  • Online purchasing
  • Voice and video applications
  • Batch applications
  • Interactive applications

All traffic benefits from QoS, but UDP streams require extra consideration. These are typically real-time streams that forgo the overhead of TCP, so lost packets are not retransmitted and congestion affects them immediately.

QoS best practices to consider

1. PERFORM A NETWORK ASSESSMENT

Performing a network assessment is a crucial first step because it will inform the development of your subsequent QoS policies. A network assessment will give you valuable insight into the current state of the network and provide a baseline for the type of data being processed, as well as how much. This is the quickest way of identifying congestion areas, misconfigurations, and any other network problems that might affect the effectiveness of your end-to-end QoS deployment. A network assessment might, for example, help you identify outdated hardware that needs to be upgraded.

2. IDENTIFY PRIORITY NETWORK TRAFFIC

Once you’ve performed a network assessment and documented your findings, the next step is to consider which network traffic types are of the highest priority. This will include traffic types that are most important to your business, like protocols that perform dynamic routing activities. You should categorize data flows into specific classes, according to priority level.

3. CATEGORIZE LATENCY-SENSITIVE DATA FLOWS

The next step is to categorize latency-sensitive data flows, including voice and video conferencing. This is also likely to include applications that are critical to the day-to-day operations of your business. Continue this categorization process until you reach data streams the network assessment identified as being inessential. General website surfing, for example, might be placed in the non-essential category.

4. CATEGORIZATION SHOULD INVOLVE BUSINESS LEADERS

This is a fundamental but often overlooked QoS best practice. Although it’s useful to involve network administrators in the categorization process, it’s critically important that business leaders drive application categorization. Business leaders can provide insight into which applications are genuinely essential, while network administrators may only be able to speculate.

5. CONSIDER ELIMINATING NON-ESSENTIAL DATA FLOWS

If you discover certain data flows are non-essential, consider removing them entirely. Eliminating this traffic means QoS doesn’t have to drop it when congestion occurs, and it can alleviate bandwidth constraints on its own.

6. APPLY QOS CLASSES

Once you’ve broken down your data flows into categories according to importance and latency requirements, you’ll need to assign these applications to one of several classes. A QoS class refers to the policy configuration performed on network routers and switches.
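
In practice, each class is tied to a marking that routers and switches act on. The mapping below is a minimal sketch using standard DSCP values (EF, AF41, AF31, CS1, BE); the class names and the applications assigned to each class are assumptions for illustration only.

```python
# Standard DSCP codepoints mapped to a small, hypothetical set of QoS classes.
QOS_CLASSES = {
    "realtime":      {"dscp": 46, "name": "EF",   "examples": ["voice"]},
    "interactive":   {"dscp": 34, "name": "AF41", "examples": ["video conferencing"]},
    "business_data": {"dscp": 26, "name": "AF31", "examples": ["ERP", "database"]},
    "scavenger":     {"dscp": 8,  "name": "CS1",  "examples": ["software updates"]},
    "best_effort":   {"dscp": 0,  "name": "BE",   "examples": ["general web browsing"]},
}

def dscp_for(app_class):
    """Return the DSCP codepoint a router or switch would match on for this class."""
    return QOS_CLASSES[app_class]["dscp"]

print(dscp_for("realtime"))   # 46
```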

7. LESS IS MORE

You might be inclined, at this stage, to configure an array of QoS classes to meticulously define QoS policies for each data flow type. In this case, however, less is more. One of the reasons QoS management is so complex is because of the sheer amount of time and resources required to maintain each class and its associated policies. The fewer classes you create, the easier the process of deployment and ongoing maintenance will be.

8. APPLY QOS CLASS IDENTIFIERS

It’s best practice to identify and mark network traffic with a specific QoS class identifier as close to the source device as possible. In some instances, the application itself can tag packets on your behalf, in which case you need to configure the network to trust those markings. In other cases, you can configure network access switch ports to identify and mark data as it passes through the switch. These activities increase demand on both RAM and processing power, so it’s also important to monitor CPU and memory usage once a QoS deployment is rolled out into production.
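
As one example of marking close to the source, an application can set the DSCP value on its own packets. The Python sketch below sets the IP TOS byte on a UDP socket to EF (DSCP 46, shifted left two bits to 0xB8); the destination address and port are placeholders, and whether downstream switches honor the marking depends on their trust configuration and the host platform.

```python
import socket

DSCP_EF = 46                      # Expedited Forwarding, typically used for voice
tos = DSCP_EF << 2                # DSCP occupies the upper six bits of the TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# IP_TOS is available on most Unix-like platforms; some systems restrict or ignore it.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)

# Any datagram sent from this socket now carries the EF marking in its IP header.
sock.sendto(b"rtp-like payload", ("198.51.100.10", 16384))
```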

9. AVOID POLITICAL HEADACHES

Because QoS requires the prioritization of certain activities over others, you may run into political repercussions within your organization. To minimize the impact of non-technical deployment obstacles, it’s important to address political and organizational issues as early as possible. To avoid disputes, keep communication lines open so everyone is on the same page.

10. REMEMBER QOS IS NOT A ONE-TIME SETUP

It’s important to keep in mind that managing QoS is an ongoing process. It needs to be monitored closely and audited regularly to ensure it’s functioning properly. You should repeat your network assessment at least annually so you can identify any changes in data flows and application usage.

11. IMPLEMENT CHANGES AS NECESSARY

This best practice builds on the previous one: monitoring is only useful if you act on it. When conducting future network assessments, use the information you gather to perform network upgrades and to re-categorize applications and QoS policies where appropriate. Remember to think of QoS as fluid, not static.

The best QoS software for MSPs

For a robust all-in-one tool that also serves as a QoS solution, look no further than SolarWinds® Remote Monitoring and Management (RMM). This software features a tool called NetPath, which assists network administrators with enhancing QoS. The NetPath feature uses advanced probing to detect the network path from a source server to a destination service, even when traceroute is unable to do so. This affords you in-depth visibility into critical network paths, whether they are on-premises, off-premises, or in a hybrid IT environment. NetPath helps you troubleshoot hot spots across your complete delivery chain, rapidly and efficiently.

NetPath also delivers advanced performance and QoS monitoring capabilities, notifying you of outages before they impact your users. With NetPath, SolarWinds RMM collects performance metrics and information on network connectivity between source and destination nodes. This gives you insight into the end-to-end performance experienced by a user and alerts you when packet loss and latency thresholds are breached.

SolarWinds RMM also offers the NetPath feature’s node and hop information, an online backup and recovery manager, and much more. RMM is easy to use and features a dynamic dashboard designed to simplify the experience of gathering and monitoring data. A 30-day free trial is available for MSPs interested in learning more.
