Archive for the ‘802.11n’ Category

Education Technology at BETT 2014: Wireless as a Service, 802.11ac and Social Wi-Fi

March 14th, 2014

This first post is from Zara Marklew, EMEA channel manager, the newest addition to the AirTight team in Europe. Connect with Zara on LinkedIn and Twitter. Welcome, Zara!

BETT 2014, the UK’s learning technology show, has been and gone, but it certainly won’t be forgotten! For those in the educational technology sector, from primary school teachers to the network managers of colleges and large secondary schools, this was THE event, memorable both for new technology and for aching feet after four days of conference.

Wireless as a Service for Education

So what was all the fuss about and why was #BETT2014 trending on the social feeds? There were a few noticeable trends this year noted by attendees and exhibitors alike. Firstly came “XXX as a service”! As educational funding changes, so does the need to adapt and service the new legislation whilst still enabling the educational IT needs in what is a constantly evolving technology landscape.

Cloud Wi-Fi as a service


One offering that stuck out was North Pallant Cloud and their WISE (Wireless as a Service for Education). Using secure cloud-managed Wi-Fi provided by AirTight Networks, they offer an OPEX model that fits the new financial structures of schools and enables secure Wi-Fi to be deployed. School administrators can deliver all the new applications to their teachers and students with the assurance that the network is secure, while heads of IT get simple management and deployment.

Verdict on 802.11ac in Education Is Still Out

There wasn’t much talk of the new Wi-Fi standard, 802.11ac, from attendees, the complete opposite of most Wi-Fi vendors at the show. The general opinion was that schools don’t have the money to invest in .ac devices yet, and that in reality a robust, well-designed and well-deployed .n network will still allow streaming video to all students. Most of those I spoke to saw 802.11ac as a shrewd plan by Wi-Fi vendors’ marketers to increase sales rather than as groundbreaking technology.

I’m pretty sure that opinion will be more divided come BETT 2015. Most schools appear to be first- or second-generation Wi-Fi users, and the realization is dawning that security and management of what has now become the primary application delivery network in education, Wi-Fi, are actually pretty important! To that extent, the newer breed of Wi-Fi vendors, those who are cloud managed, were the vendors to see, with the common message that the controller is dead. It’s not often you see vendors “buddying up”, but that was the shared messaging.

Social Wi-Fi: Fit for Education?

The advent of social Wi-Fi seems to have split opinion on its value to the educational sector. The majority of IT people saw no value, whereas a sizable number of principals I spoke with saw it as an invaluable way of promoting the school and being seen as technically “with it.” Competition is fierce amongst schools, and social Wi-Fi is a way of engaging with students, who according to several people “go zombie” as soon as class is done and lurch around the hallways, seemingly sucked into their mobile devices. What better way to tell them of events, news and the like than by communicating on the students’ preferred platforms, Twitter and Facebook? We’ll see over the coming months which way this goes, but it is here to stay in one format or another.

With Wi-Fi now running smart boards, tablets, laptops, desktops, telephony, cameras, entry systems, in fact pretty much everything, 2014 looks set to be memorable for those in the business and those looking to utilize the technology.

A gold star and a certificate of merit for all exhibitors and attendees and roll on BETT 2015, my lanyard is ready!

802.11ac, 802.11n, Education

Corner Cases

February 26th, 2014

Most Wi-Fi manufacturers’ marketing departments would have you believe that 99% of all deployments are what I’d call “corner cases.” I call B.S. (as usual).

Here are the high-density/high-throughput (HDHT) corner cases that so many manufacturers would have you believe are so prevalent:

  • Large K-12 and University libraries, cafeterias, lecture halls, and auditoriums
  • Stadium or gymnasium bowls
  • Large entertainment venues (e.g. music and theater halls, night clubs)
  • Trade shows
  • Urban hotspots
  • Airports

Combined, these use cases comprise less than 1% of all Wi-Fi installations.  In other words, the opposite of what many marketing departments would have you believe. Let’s look at this from another angle. Here’s a list of use cases that do NOT fall into the category of HDHT, but may have other technical challenges or requirements, yet these same marketing departments want customers to believe they are HDHT environments.

  • K-12 classrooms*
  • Malls
  • Majority of airports

* Note: Some folks believe that one AP per classroom (or even one AP per two classrooms) is a bad idea due to adjacent channel interference (ACI) or co-channel interference (CCI), but that’s a design matter based on a long list of design criteria that can include wall construction materials, AP output power, client device type, client device output power, and MUCH more. I assert that one AP per one (or two) classrooms is a good network design in many K-12 environments, and this usually means fewer than 35 devices per classroom, worst case. 35-70 devices per AP (2 radios) does not constitute high density, but it may necessitate good handling at L1, L2 (QoS), and L7.

Consider all of the common deployments that constitute the majority of WLAN environments:

  • Office environments
  • Warehouses
  • Manufacturing
  • Hospitals
  • Distributed healthcare facilities
  • Cafes
  • Bookstores
  • Hotels

So if HDHT handling isn’t a big deal in 99% of the use cases, what is important? If you ask that question of those same vendors’ marketing departments, they would say performance! Once again, I call B.S.

After speaking with a variety of network administrators and managers, I’ve found it very difficult to find anyone who can produce statistics showing an AP sustaining more than 10Mbps over the course of an 8-hour business day. Even the peak throughput on the busiest APs isn’t all that high (a couple of hundred Mbps, sustained only for a couple of minutes while large files are being transferred). It’s been my experience that busy branch offices, with a single AP serving 50-60 people, are where you find the most sustained WLAN traffic over a single AP.

If 10Mbps is considered “a very busy AP”, and decent 2×2:2 802.11n APs can sustain 200+Mbps of throughput across two radios given the right RF and client environment, then why is everyone talking about performance? I hear vendors bragging about their 3×3:3 11ac APs being capable of 900+Mbps of throughput under optimal conditions. While that kind of throughput is sexy to IT geeks who think that “too much is never enough”, most customers just want it to work. At 200-400 Mbps of throughput for 802.11n APs, why do we care so much about buying premium-priced 11ac APs again?
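To make the point concrete, here is a back-of-the-envelope sketch of the utilization implied by the numbers above. The capacity figures are the post’s illustrative ones, not benchmarks:

```python
# If a "very busy" AP sustains 10Mbps over a business day, how much of its
# throughput capacity is actually used? (Illustrative numbers from the post.)

def utilization(avg_mbps, capacity_mbps):
    """Fraction of an AP's throughput capacity actually in use."""
    return avg_mbps / capacity_mbps

busy_ap_avg = 10.0      # Mbps sustained over an 8-hour day ("very busy")
capacity_11n = 200.0    # Mbps a decent 2x2:2 802.11n AP can sustain
capacity_11ac = 900.0   # Mbps claimed for a 3x3:3 802.11ac AP, optimal RF

print(f"11n utilization:  {utilization(busy_ap_avg, capacity_11n):.1%}")   # 5.0%
print(f"11ac utilization: {utilization(busy_ap_avg, capacity_11ac):.1%}")  # 1.1%
```

Even against an 11n AP’s capacity, the “very busy” AP sits around 5% utilization, which is the heart of the argument.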

What do we get out of those 11ac APs anyway? 256-QAM is useful only at short range and only for 11ac clients. TxBF is only good at mid-range, and only for those clients that support it, which is basically none. Rate-over-range is better for uplink transmissions, but if you’re designing for capacity, voice, or RTLS, then this is of no consequence. There may be slightly fewer retransmissions due to better radio quality, but that’s mostly “who cares” as well. Bottom line: don’t upgrade your 11n infrastructure for the purpose of speed. If speed (e.g. rate-over-range and raw throughput) is your goal, spend your budget on refreshing your 11ac clients first.

Customers who rush out to buy the latest, greatest, fastest AP end up paying a big price premium for a performance gain that they’ll never, ever, ever, ever use. It’s just silly. They get duped by the marketing message that HDHT handling and ultra high-performance matter in 99% of use cases, when in fact it matters in <1% of the real world use cases. Wi-Fi infrastructure technology is progressing quickly, and the PHY/MAC layers are so far ahead of typical use cases that customers should be focused on correct Layer-2 design and receiving value above Layer-2:

  • Robust, global, cloud management and services option
  • Strong security, compliance and reporting
  • Device tracking / location services
  • Social media integration (i.e. Facebook, LinkedIn, Twitter)
  • Guest and retail analytics
  • Managed services enablement

If you’re going to buy (or upgrade to) an 11ac infrastructure, there’s a very important reason to do it that is unrelated to the speed at which you move frames across the air: intelligence. Some APs don’t have the horsepower to do any significant local processing, and that leaves three options related to infrastructure intelligence:

1) don’t have any
2) send everything to the cloud
3) send everything to a controller

I prefer that APs have enough oomph to get the job done if that’s the optimal place to do the work. There are times when using the cloud makes sense (distributed, analytics), there are times when using the AP makes sense (application visibility/control), and there are times when using a controller makes sense (2008-2009). #CouldntResist

I’ll summarize all of this by asking that prospective customers of Wi-Fi infrastructure remember that they will likely never use even a small fraction of the throughput capabilities of an AP. What will have a significant impact is Wi-Fi system cost, Wi-Fi system architecture, and network design. Don’t get duped by the loud, obnoxious marketing hype around the speed/throughput. Think twice, buy once.


802.11ac, 802.11n, Best practices, WLAN planning

Wi-Fi Speeds-n-Feeds Are So Old-school

October 30th, 2013

Speed-n-feeds are not the future of enterprise Wi-Fi. Speed-n-feeds are like my grandmother’s potatoes. I’ll explain.


Speed is a given. Speed is a commodity. This is what you talk about when you have nothing else to offer, such as system intelligence. Some companies keep trying to rehash the speeds-n-feeds story the way my grandmother used to treat potatoes. First, you’d have baked potatoes. If you didn’t eat all of them, the next night you’d have mashed potatoes – made from those same potatoes, of course. If there happened to be any leftovers after that, you’d have fried potato cakes the night after. Believe me, the list of ways those potatoes could be served was endless until the potatoes were gone. Same potatoes, different day.

With 802.11ac, networks have plenty of speed. In fact, I will bet you a cold beer that far more bandwidth goes unused on a daily, hourly, and per-minute basis than is ever used – even with an 802.11n network. You would be hard-pressed to find an AP in any enterprise environment that exceeds an average of 10% of its throughput capability when averaged over a normal 8-hour work day. If you find even one, please give me a shout with statistical proof. I’m interested to see that data.

There is Wi-Fi bandwidth to spare within most enterprise networks that have 802.11n or 802.11ac technology. The bottleneck isn’t in the radio capability, the architecture, the channel airtime, the AP CPU power, or even the available spectrum. The only exceptions are major interference sources, which can affect channel airtime and available spectrum, but that’s off-topic because you don’t design capacity around interference sources.

If there’s an actual speed bottleneck anywhere, it’s found in the battery life of mobile devices, which causes the use of single spatial stream (1SS) radios. Truth be told, even mobile devices using 1SS are fast, ranging from 65 to 433 Mbps, with about half of this data rate being considered usable throughput.
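A quick sketch of the 1SS rates cited above. The PHY rates are the single-stream figures the post mentions; the 50% efficiency factor is the post’s rule of thumb for usable throughput, not a measured value:

```python
# Single-spatial-stream (1SS) PHY rates cited in the post, and the rough
# usable throughput after applying the "about half" rule of thumb.

PHY_RATES_1SS_MBPS = {
    "802.11n, 1SS, 20 MHz": 65,
    "802.11ac, 1SS, 80 MHz": 433,
}

EFFICIENCY = 0.5  # roughly half the PHY rate ends up as usable throughput

for config, phy_rate in PHY_RATES_1SS_MBPS.items():
    usable = phy_rate * EFFICIENCY
    print(f"{config}: PHY {phy_rate} Mbps -> ~{usable:g} Mbps usable")
```

Even a 1-antenna mobile device, in other words, has tens to hundreds of megabits of usable throughput available to it.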

If you have some kind of crazy high-density scenario, sure, you could potentially run into an airtime bottleneck on specific APs at specific times, but that’s a sub-one-percent use case in most enterprises, and buying an entire solution around a sub-one-percent use case seems silly given the price multiple that you are apt to pay.

Most vendors are still building wave-1 802.11ac APs, and the industry is already talking about forthcoming wave-2 technologies, which are more than a year away, highlighting the marketing frenzy around performance. Look, I’m not against performance, but I’m saying that it’s far from the most important consideration.  In fact, at this point, it’s pretty far down the list.

Well, since there’s plenty of speed, what aspects are more important?

  • Solutions focused on vertical markets
  • MSP (managed service provider) enablement
  • Simplicity and intuitiveness of use and deployment
  • Sales model and process: capex, opex
  • System security, redundancy and stability
  • System maintenance, monitoring and compliance reporting

Those are just a few, but they are enough to point out that all of the hoopla around “just speed” is off-target. Almost any enterprise vendor can now provide reasonable connectivity, and they have the reference customers to prove it. Therefore, winning in the market is no longer just about connectivity, but rather about solving customer problems by providing a focused solution.

You like apples? ;)

Image: For an uncommon twist on potato cakes, get the recipe for orange ricotta sweet potato pancakes.

802.11ac, 802.11n, WiFi Access

Bang for the buck with explicit beam forming in 802.11ac

October 16th, 2013



802.11ac has brought with it MIMO alphabet soup: spatial streams, space-time streams, explicit beam forming, CSD, MU-MIMO. Alphabet soup triggers questions to which the curious mind seeks answers. This post is an attempt to explore some questions surrounding explicit beam forming (E-BF), which is available in Wave-1 of 802.11ac. E-BF is a mechanism that manipulates transmissions on multiple antennas to boost SNR at the target client.

How is E-BF related to spatial streams?

E-BF is a technique different from spatial streams. E-BF can be used whenever there are multiple antennas on the transmitter, irrespective of the number of spatial streams used for transmission.

In a transmission using multiple spatial streams, distinct data streams are modulated onto signals transmitted from distinct antennas. The signals from different antennas mix in the wireless medium after transmission. The receiver uses signal processing techniques to separate the distinct data streams from the mixture. Its ability to do so depends on the channel conditions between the transmitter and the receiver (to isolate `S’ streams at the receiver, the channel matrix needs a rank of `S’ or more). There is no channel-dependent processing of the signal at the transmitter; receiver performance is channel dependent. Some key points regarding multiple spatial streams (spatial multiplexing) are:

  • To support `S’-stream transmission, both AP and client must have at least `S’ antennas
  • A rich scattering environment (e.g., indoor) is conducive to a high-rank channel matrix
  • There is no need to send channel feedback from the receiver to the transmitter.

In a transmission using E-BF, the spatial streams are pre-processed (and pre-mixed) to match the channel characteristics from the transmitter to the receiver, and the output of the pre-processor is transmitted from the different antennas. Feedback from the receiver about the channel characteristics is used in the pre-processing. For a practical implementation called the ZF (Zero Forcing) receiver, E-BF boosts the SNR of the spatial streams at the receiver. Some key points regarding E-BF are:

  • Feasible with multiple antennas on the AP, irrespective of the number of spatial streams
  • Affects SNR of spatial streams at the receiver
  • Requires channel dependent pre-processing of signal at the transmitter
  • Requires feedback on channel characteristics from the receiver to the transmitter
  • Does not require multiple antennas at the receiver.

When is E-BF truly beneficial?

In general, E-BF is truly beneficial when the number of spatial streams in use is less than the number of antennas on the AP. In Wi-Fi, this most commonly happens when the client has fewer antennas than the AP. For example, most smartphones and tablets have only 1 antenna.

Stream vs beam tradeoff:

For the example of a 3-stream AP and a 3-stream client, adding E-BF on top of a 3-stream transmission may not give significant benefit. This is because, with E-BF, different spatial streams typically experience an unequal boost in SNR. The SNR can be significantly boosted for some spatial streams, but there will usually also be spatial streams for which the boost is not significant, or for which the SNR is even degraded compared to the case when E-BF is not used. To be precise, the SNR boost for each spatial stream is dictated by the corresponding singular value of the channel matrix, and the singular values of a practical channel matrix range from high to low (E-BF is based on a technique called Singular Value Decomposition, or SVD). Couple this with the fact that practical implementations use the same MCS on all spatial streams: this means either using the MCS supportable by the weakest-SNR spatial stream for all spatial streams, or using a high MCS for the strong streams and dropping the weak streams. There is an excellent explanation of this tradeoff in Chapter 13 of the book by Perahia and Stacey (if you are up for reading some math!).

However, if a 3-stream AP can support only 1-stream transmission to the client, E-BF can give significant gain. This is commonly the case with smart devices, which have only 1 antenna. In this case, the single stream will most likely get an SNR boost, and there is no other stream to counter it.
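The singular-value spread behind this tradeoff is easy to see numerically. A minimal NumPy illustration (not from the post; the 3x3 channel matrix is randomly generated, purely for demonstration):

```python
# Why per-stream SNR boosts are unequal under E-BF: the boost for stream i
# tracks the i-th singular value of the channel matrix H, and those values
# typically spread from strong to weak.

import numpy as np

rng = np.random.default_rng(7)
H = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))  # toy MIMO channel

# E-BF rests on the SVD: H = U @ diag(s) @ Vh. Precoding with V at the
# transmitter and combining with U^H at the receiver turns the channel into
# parallel pipes whose gains are the singular values s[i].
U, s, Vh = np.linalg.svd(H)

for i, gain in enumerate(s):
    # Per-stream SNR scales with the squared singular value.
    print(f"stream {i}: gain {gain:.2f}, SNR relative to strongest {(gain / s[0])**2:.2f}")
```

Running this for random channels shows the weakest stream’s relative SNR falling well below the strongest one, which is exactly why a uniform MCS across streams forces the choice described above.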

How much overhead does E-BF feedback cause on wireless bandwidth?

E-BF requires feedback from the receiver to the transmitter about the channel characteristics. To trigger this feedback, the transmitter sends a sounding packet to the client. The client performs channel measurements on the sounding packet and responds to the AP with the channel feedback. A question that often comes up with E-BF is how much wireless link overhead the feedback report causes. To answer that question, take a look at this spreadsheet. From it, the feedback overhead appears relatively small (only about 0.1% of airtime), particularly for the case where E-BF is going to be most beneficial, e.g., a 3- or 4-antenna AP talking to a 1-antenna client.
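A rough recomputation shows why the overhead lands in that range. The two parameter values below are simplified assumptions for illustration, not the spreadsheet’s exact figures:

```python
# The sounding exchange (NDP announcement + NDP + compressed feedback report)
# is short relative to how rarely it is repeated, so its airtime share is tiny.
# Both numbers below are assumed round figures, not measurements.

sounding_interval_ms = 100.0  # assumed: AP sounds each client every 100 ms
exchange_airtime_us = 100.0   # assumed: total airtime of one sounding exchange
                              # for a 1-antenna client

overhead = (exchange_airtime_us / 1000.0) / sounding_interval_ms
print(f"feedback overhead: {overhead:.2%} of airtime")  # 0.10%
```

With a ~100 microsecond exchange every ~100 ms, the overhead is on the order of 0.1% of airtime, consistent with the spreadsheet’s conclusion.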

All factors considered, E-BF appears to provide benefits for smartphones and tablets, which typically have only 1 antenna and hence cannot support multiple spatial streams when connected to a 3- or 4-stream AP. On the other hand, when there are multiple antennas on both sides of the link (such as a 3- or 4-stream laptop connected to a 3- or 4-stream AP), spatial multiplexing without E-BF can be as good as with E-BF. These inferences are drawn from MIMO principles, and it would be interesting to see if they match up with measurements from practical Wave-1 environments.

Earlier Posts in 802.11 Network Engineering Series:


802.11ac, 802.11n, WiFi Access

Hunting down the cost factors in the cloud Wi-Fi management plane

October 3rd, 2013


Mature cloud Wi-Fi offerings have already gone through a few phases. They started with bare-bones device configuration from the cloud console, and over the years matured into a meaty management plane for complete Wi-Fi access, security and complementary services in the cloud.

Alongside these phases of evolution, optimizing the cost of operating the cloud backend has always been an important consideration. It is critical for cloud operators and Managed Service Providers (MSPs). This cost dictates what end users pay for cloud Wi-Fi services and whether attractive pricing models (like AirTight’s Opex-only model) can be viable in the long run. It is also important to the bottom line of the cloud operator/MSP.

Posed with the cost question, one might impulsively say that cost is driven by capacity, i.e., the number of APs that can be managed with a given unit of compute resources in the cloud. That is an important cost contributor, but not the only one!


What do the cost models from cloud operation reveal?


We have monitored cloud backend operation costs for the past several years and have built cost models from that data. These models have led to the discovery of the factors that are significant cost contributors. Identifying a cost component is a major step towards reducing it; the reduction is often implemented through a combination of technology and process innovations.


Draining the cost out of cloud



Scalability

This one is a no-brainer for anyone with their head in the cloud. Scalability generally refers to the number of APs that can be managed with a unit of compute resource. Higher scalability helps reduce the cost. Enough said.


Capacity Holes

As customers of diverse scales (10 APs to 10,000 APs) are deployed in the cloud, and at diverse paces, unused capacity holes often result in the provisioned compute resources. Capacity holes are undesirable because the cloud operator or MSP has to pay for them, yet they are not utilized towards managing end user devices.

The unused capacity problem needs to be solved at two points in time: initial provisioning and re-provisioning. Clearly, when new customers are deployed, you try to fit them into right-sized capacity buckets. Assuming they love your product, they will then deploy more and start to outgrow their capacity buckets (but you also cannot over-provision, else there will be a capacity hole from the beginning). This is the re-provisioning time. At that point, the cloud architecture and processes need to be able to seamlessly migrate customers to bigger capacity buckets.


Plug-n-Play Experience

The very reason customers choose the cloud is that they want a plug-n-play experience. As such, the patience level of the cloud customer is often lower than that of one choosing the onsite deployment option. This necessitates a higher level of plug-n-play experience to avoid support calls.

There are various points in the life cycle that have a high tendency to generate support calls. One is when devices connect to the cloud, or rather, fail to connect to the cloud. Another critical time is during software upgrades. Issues also often arise during re-provisioning, as discussed above, when customers are migrated between compute resources. The cost of attending to support calls can be a significant factor if these experiences are not super smooth. Additional complexities arise when APs are sold through the channel, but the cloud is operated by the vendor or another MSP.

The pricing logic behind reducing personnel cost at the MSP is as follows. The end user is eliminating onsite personnel cost by migrating to the cloud, and hence paying less on a TCO basis. When the experience is not smooth, this cost is transferred to the personnel at the cloud operator or MSP. The cloud operator and MSP cannot make money if they pick up a significant part of this cost themselves.

Latent Resources

Certain features, such as high availability and disaster recovery, have the potential to give rise to latent resources. Latent resources are different from the capacity holes discussed before. Latent resources are like insurance in that they don’t get utilized most of the time, but they need to be maintained in great shape. Brute-force implementation of these redundancy features has been found to be a significant cost contributor in cloud operation.

For any cloud services platform, the above pain points are exposed only after years of operational experience and teething pains with diverse customer deployments. That is why it is appropriate to say that there are two parts to viable cloud operation: one is the computing technology that enables complete management features, and the other is operational maturity. Overlook either one and the cloud can become unviable for the operator/MSP and customers in the long term.


Additional references:

Wireless Field Day 5, AirTight Cloud Architecture video

Aruba Debuts Bare-Bones Cloud WLAN at Network Computing by Lee Badman

Next generation cloud-based Wi-Fi management plane

Controller Wi-Fi, controller-less Wi-Fi, cloud Wi-Fi: What does it mean to the end user?

AirTight is Making Enterprise Wi-Fi Fun Again

Different Shades of Cloud Wi-Fi: Rebranded, Activated, Managed


802.11ac, 802.11n, Cloud computing, WLAN networks

Get Soaked in the Future of Wi-Fi

September 5th, 2013


AirTight Networks is armed with Wi-Fi of the future, and blasting the message out through social media.


Have you ever noticed that there always seems to be a disconnect in the Wi-Fi industry whereby vendors build and sell their products based on hardware capabilities, tech specs, and geeky feature sets while customers ultimately evaluate products based on how the solution fits with their organizational objectives? That’s a problem.


The Wi-Fi market is on the cusp of a second wind of tremendous growth that will be driven by focusing product solutions on the tailored needs of customers in every vertical market. This is a departure from the status quo: historically, the Wi-Fi market has grown by pushing products (not solutions) based on the latest hardware enhancements and the improvements in speed that have come with each iteration of the 802.11 standard. But that model is breaking down as the technology matures and hardware differentiation alone becomes minimal, while customers demand more tailored solutions as their own markets evolve into a mobile-enabled workforce and customer experience.



What’s exciting is that AirTight is already delivering Wi-Fi of the Future (#WiFuture if you’re following along on Twitter). We provide tailored solutions that include social Wi-Fi integration that enables retailers to engage consumers and provide enhanced customer service, presence and location analytics to understand and adapt to customer behavior in-store, and the most robust wireless security solution on the market to secure data well beyond basic PCI compliance requirements. And that’s only the beginning.


AirTight is building solutions that enable the Wi-Fi of the future through:


A Software-Centric Approach – leveraging the rich data analytics available through an intelligent access network, and software defined radios that enable flexibility of hardware use for client access, security monitoring, and performance analysis.


Intuitive User Experience – making Wi-Fi simpler to deploy and troubleshoot so the network isn’t broken or under-performing.


Operational Expense Model – enabling customers to acquire the latest solutions without breaking the budget.


Mature Cloud – that is truly elastic with both public cloud and private cloud options, enabling easy expansion to meet growing network demands without causing unnecessary retooling or plumbing of the existing network. Mature cloud offering also enables the coming wave of Managed Service Providers (MSPs) who will serve the mid-market.


A Culture of Listening – to customers, partners, and industry experts in various industries so that we understand the business drivers for technology solutions and ensure we build products that deliver on those needs.



@AirTight: Soaking the Industry in the Future of Wi-Fi!



We are also building an incredible team of industry experts to blast this vision to the market through social media.  AirTight is armed with Super Social experts, kind of like those old Super Soaker water gun blasters we all loved from a few decades ago (has it been that long already!). The tank is full of energy and innovation, and the social media team at AirTight is at the trigger!


So, are you ready to blast away your Wi-Fi woes? Don’t get stuck on the wrong side, soaked and wet in yesterday’s technology.




802.11ac, 802.11n, WiFi Access, Wireless security, WLAN networks

11 Commandments of Wi-Fi Decision Making

September 4th, 2013


Are you considering a new Wi-Fi deployment or an upgrade of a legacy system? Then be prepared to navigate a maze of decision factors, given that Wi-Fi bake-offs increasingly require multi-faceted evaluation.


Follow these 11 “C”ommandments to navigate the Wi-Fi decision tree:


  1. Cost

  2. Complexity

  3. Coverage

  4. Capacity

  5. Capabilities

  6. Channels

  7. Clients

  8. Cloud

  9. Controller

  10. 11aC, and last but not least …

  11. seCurity!




1) Cost:


Cost consideration entails both “price and pricing” nuances. Price is the size of the dent to the budget, and everyone likes it to be as small as possible. Pricing is the manner in which that dent is made – painful or less painful (I don’t think it can ever be painless!). One aspect of pricing is the CAPEX/OPEX angle. Other aspects, such as licensing, front-loaded versus back-loaded payments, maintenance fees, etc., have been around for a long time, so I won’t drill into details other than to say that they exist and need to be considered. Enough said on cost.


2) Complexity:


Complexity consideration spans deployment, configuration and ongoing maintenance. One pitfall to avoid is liking complexity in the lab and then repenting it in production. Too many knobs to turn and tune, excessive configuration flexibility and exotic features can all add to complexity. That said, complexity considerations cannot swing to the point of being simplistic. Rather, the balanced approach is to look for solutions that have mastered complexity to extract simplicity to meet your needs (borrowing from Don Norman’s terminology here).


3) Coverage:


When you hear terms like neg 55, neg 60, neg 65 (that is, -55, -60, -65 dBm signal levels), you know people are reconciling coverage expectations with the number of access points. No explanation is needed for how important coverage is to your wireless network; the important point is that coverage determines the number of access points needed to cover the physical area. At the planning stage, RF predictive planning comes in handy to estimate the coverage BOM (a site survey can complement it for sample areas during the evaluation stage).


4) Capacity:


While coverage determines how far, capacity determines how many and how much. Capacity determines how small or large the cells can be. Using practical models of Wi-Fi usage, capacity objectives can be set and the network design evaluated against them. Capacity also determines the number of access points needed to provide the desired throughput in the physical area. RF predictive planning tools can be invaluable during the evaluation phase for capacity estimation.
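A hedged, back-of-the-envelope sketch of the capacity side of AP sizing: divide aggregate demand by per-AP usable throughput. All figures below are example planning assumptions, not recommendations; a real design also folds in coverage (the previous commandment), airtime and co-channel interference constraints:

```python
# Toy capacity estimate: how many APs does a given aggregate demand imply?
# Every number here is an assumed planning input, purely for illustration.

import math

clients = 600                # devices expected on the WLAN
per_client_mbps = 2.0        # assumed average offered load per device
ap_usable_mbps = 100.0       # usable throughput budgeted per dual-radio AP

aps_for_capacity = math.ceil(clients * per_client_mbps / ap_usable_mbps)
print(f"APs needed for capacity: {aps_for_capacity}")  # 12
```

The final AP count is then the larger of the capacity-driven and coverage-driven estimates.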


5) Capabilities:


By capabilities, I mean the feature set. This is one of the most important aspects, because this is where you ask: “Will the Wi-Fi serve the needs of the business?” It is very industry specific. Some features are extremely critical for one vertical but won’t even be noticed in others. So it’s important to identify both the features you care about and those you don’t. Once identified, move on to thoroughly evaluate the ones you care about.


6) Channels:


One aspect of channels is deciding how the RF network will be provisioned across 2.4 GHz and 5 GHz operation. There are advantages to 5 GHz operation, but 2.4 GHz is not EOL yet. How applications are split between the two bands determines the number and type of radios required in the design. Tools and techniques needed to plan, monitor and adapt to the dynamic RF environment are also an important consideration.


7) Clients:


Much of what is achievable in a Wi-Fi network depends upon the capabilities of the client devices that will access it. One set of considerations is mainly around the radio capabilities of clients, such as 2.4 GHz/5 GHz operation, number of radio streams, implementation of newer standards in clients, etc. Another set of considerations revolves around the applications they run and the traffic profile these applications generate. Yet another set of considerations centers around the level of mobility of the clients. BYOD is another consideration that has become important in the client arena.


8) Cloud or 9) Controller:


Today, we see pure cloud architecture, pure controller architecture and also architectures confused between the two concepts. While vendors and experts spar over which is the right architecture for today’s and tomorrow’s Wi-Fi, evaluators should focus on comparing them based on their derived value. It is also important to understand what cloud and controller concepts actually mean from the data, control and management plane perspective. Cloud and controller are distinct ways of organizing overall Wi-Fi solution functionality.


10) 11aC:


Making a judicious decision on what to deploy today, or whether to upgrade now, is tricky. There are many views on it. One reason is how the features of 802.11ac are split between Wave-1 and Wave-2. It is also important to note that immediate 802.11ac benefits are application and vertical specific. Several practical network engineering considerations exist beyond the casual description of the new 802.11ac speeds that are often marketed. So, listen to vendors, listen to business needs, listen to experts, analyze yourself, and in the end, do what is best for your environment and situation. Speed is nice IF it can be leveraged in practice!
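To see where the marketed 802.11ac numbers come from, here is a sketch of the top-end VHT PHY-rate arithmetic (MCS 9: 256-QAM with rate-5/6 coding). Note that not every MCS/bandwidth/stream combination is valid in the standard, and real application throughput sits well below the PHY rate.

```python
def vht_phy_rate_mbps(n_streams, bw_mhz=80, short_gi=True):
    """Nominal top 802.11ac (VHT) PHY rate at MCS 9."""
    data_subcarriers = {20: 52, 40: 108, 80: 234, 160: 468}[bw_mhz]
    bits_per_subcarrier = 8 * (5 / 6)        # 256-QAM x rate-5/6 coding
    symbol_us = 3.6 if short_gi else 4.0     # OFDM symbol duration (short/long GI)
    return n_streams * data_subcarriers * bits_per_subcarrier / symbol_us

print(round(vht_phy_rate_mbps(3, 80)))    # Wave-1 style: 3 streams at 80 MHz
print(round(vht_phy_rate_mbps(1, 160)))   # Wave-2 adds 160 MHz channels
```

The 1.3 Gbps headline figure is exactly this 3-stream, 80 MHz number; whether any client in your environment can actually sustain it is the vertical-specific question raised above.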


11) SeCurity:


Any information system sans security is worse than worthless, especially today. That said, the level of security required by the wireless environment depends on factors such as the value of the information at risk, compliance requirements and enterprise security policies. The desired security level determines the right mix of inline data security (encryption, authentication) and security from unmanaged devices (WIPS). Speaking of WIPS, the biggest red flags to watch for are trigger-happy solutions that generate false alarms, boast a long list of "popcorn" alerts and require excessive manual involvement in the security process.


My hope is that these "C"ommandments will serve as guidelines in your Wi-Fi decision-making process. You can follow them in any order you like to ensure a holistic evaluation of the options before you. Every vendor, big or small, has sweet spots on some dimensions and not-so-sweet spots on others. So, despite what they tell you, nobody scores all A's on all C's. Hence, one has to work the evaluation criteria until a palatable scorecard is achieved, consistent with requirements and budget.




802.11ac, 802.11n, Best practices, WiFi Access, WLAN networks, WLAN planning

AirTight is Making Enterprise Wi-Fi Fun Again

August 19th, 2013


Anyone who knows me knows that I’m always looking way ahead, and it’s my opinion that AirTight Networks is uniquely positioned to take advantage of a major confluence of forthcoming Wi-Fi market changes and requirements. With

1) a scalable, plug-n-play, API-enabled, elastic cloud,

2) controller-less technology,

3) innovative and industry-leading security offerings, and

4) cost-effective, high-performance, feature-rich access points,

no other vendor is as well-positioned to take on managed services, plug-n-play enterprise Wi-Fi, and a wide variety of cloud services.

The need for uncompromising, flexible, and robust security (without the complexity that’s normally associated with it) has become a top-of-mind issue, and AirTight is the unmistakable leader in this area.


Why are so many enterprise Wi-Fi networks so broken and under-performing? The exact list is long, but the #1 reason, far and away, is that they are too complicated.

  •   Too complicated to learn.
  •   Too complicated to design.
  •   Too complicated to configure.
  •   Too complicated to deploy.
  •   Too complicated to monitor.
  •   Too complicated to upgrade.
  •   Too complicated to optimize.
  •   Too complicated to troubleshoot.


Who doesn’t constantly ask that their Wi-Fi system be simpler to deal with?  Come on, you know I’m right. Don’t even think about arguing with me on this one. I’ve posed this question to so many customers, VADs, VARs, and consultants that I’ve lost count. Too complicated usually means broken in one way or another.  How many single-AP Apple AirPort networks do you come across that are completely messed up?  What… 0.000001%?  Why?  There’s hardly anything to misconfigure, and what little there is in the configuration interface is so intuitive that my mother could figure it out.

In a manner of speaking, I want my enterprise Wi-Fi to be much the same way (hard to screw up), and of course, it should “just work”.

My friend Bradley Chambers likes to call it the 7S model:


 Simple - easy to design, configure, deploy, use, and troubleshoot

 Social - integrated social media

 Smart - intelligent, cooperative network edge and cloud management system

 Secure - this applies to the cloud management and the product security features

 Scalable - unlimited is the only acceptable description

 Stable - adequate testing is done before product release, and it “just works”

 Sensible - cost effective and reasonably priced


Since most enterprise Wi-Fi networks are overly complicated, people make mistakes in the design, deployment, configuration, and so on.  Network managers do not have the time (and rarely the inclination) to spend most of their day messing with the Wi-Fi network. They have other things to do.

It’s easy enough to say “we simplify”, but to be honest, everyone has a different definition of what simple means. Simple is relative. I think that merely making a Wi-Fi system simple falls short. As a vendor, you know you’ve arrived when your customers find your system downright fun. Fun to configure. Fun to monitor. Fun to upgrade.  And of course, if something goes south, fun to troubleshoot.


Just imagine the scene…


“Hey Mike, we need to deploy three more SSIDs today.”

 “Sweet! (fist pump)”


No more flailing about in Wi-Fi UI hell. Experience the end of Wi-Fi as we know it. With its cloud-based simplicity, security, and automation,  AirTight is making Enterprise Wi-Fi fun again.

And, by the way, AirTight just launched a free AP trial for those of you tired of complex WLAN solutions.  Experience secure, cloud-managed Wi-Fi for yourself. 


AirTight: Wi-Fi that loves you back. 



802.11ac, 802.11n, WLAN networks, WLAN planning

Pleading the fifth at Wireless Field Day 5

August 15th, 2013


AirTight R&D and support teams, based in Pune (India), tune in live to watch WFD5.



It’s not often that you get a group of Wi-Fi independent thought leaders together in the same room.  Last week, we had the privilege to address such a group at Wireless Field Day 5 (WFD5).  This was the first time that AirTight presented at the semi-annual event.  We’re hoping to be invited to the next one in February.


AirTight Networks to Make its Live Tech Field Day Debut at Wireless Field Day 5 in Silicon Valley


What made this event all the more interesting is that our session was streamed live over the Internet. An AirTight video archive was then created and can easily be referenced at any time from the Tech Field Day site.  In fact, all vendor presentations can be found here.

The AirTight session started off with a welcome by CEO David King.  His talk provided a view into the richness and depth of the wireless industry and even included a reference to Dilbert Wi-Fi.  Following are a few tweets that reflect the sentiment around David’s introductory remarks.


[Tweets from @wirelessguru, Keith R. Parsons, and Stephen Foskett]


Next came a demonstration of the AirTight user interface and ease-of-use by Dr. Kaustubh Phanse, principal wireless architect and chief evangelist, and an analytics and social Wi-Fi demonstration by Sean Blanton, senior systems engineer.

These two demonstrations were then followed by a technical presentation on the AirTight cloud-based Wi-Fi management plane by Dr. Hemant Chaskar, VP of technology and innovation.  For more depth around what differentiates a cloud-based Wi-Fi management plane from traditional architectures, you’ll want to read this @CHemantC  blog post.


AirTight WFD5 picture archive by Jennifer Huber


There was no shortage of questions for each of the presenters and it seems that the candor of AirTight answers was well received. The WFD5 delegates wasted no time in voicing their opinions.  This blog post by Blake Krone was published a few minutes after the final ‘innovation’ presentation by CTO Pravin Bhagwat.  Ryan Adzima later published a post titled NMS UI and the product managers that hate us.




Each WFD5 delegate was given an AirTight C55 AP to test drive via AirTight’s cloud service.  If you’d like to experience AirTight cloud Wi-Fi for yourself, request your free AP today.



Enquiring Minds Want To Know …


The delegates are a very social bunch and their questions and comments lit up Twitter as the sessions progressed.  Their curiosity knows no bounds and it seems that nothing is off limits. There were even tweets asking about the meaning of the tattoo on Sean Blanton’s forearm.  If AirTight is invited to WFD6, Sean might be convinced to show them the one on his back …

And while we’re on the topic of questions, we’re wondering whatever happened to the AirTight “5”? We think that Lee Badman knows … but he’s pleading the fifth.


More on WFD5 and the complicated world without wires



802.11n, WiFi Access, WLAN networks, WLAN planning

The WIPS Detective

August 13th, 2013


With the ever increasing importance of Wi-Fi as the de facto access technology, WIPS plays a key role in overall enterprise network infrastructure security.


The U.S. Department of Defense (DoD) recently created a separate category for wireless intrusion detection/prevention in its approved product listing for deployments in defense agencies.

Gartner now recommends including WIPS as a critical requirement in all new RFPs for wireless technologies.

Drivers for WIPS such as PCI compliance for retailers and BYOD for enterprises are compelling.

Secure Wi-Fi is also seen as a medium to increase the efficiency of government and public services. UK courts recently announced a program to install secure Wi-Fi in 500 courtrooms. WIPS is required to make Wi-Fi secure.


Evaluating any information security solution has always been difficult due to the comprehensive test coverage required to fully validate it. Though there is no substitute for thorough testing, there are some obvious clues that indicate the level of security and operational feasibility of a particular WIPS solution, as long as you know where to look. The WIPS Detective reviews some of these telltale signs, starting with rogue AP protection. Other signs are addressed in subsequent posts.


Rogue AP Protection


Rogue AP protection – protection from unmanaged APs connected to the enterprise network – is one of the most critical features of WIPS.

If you are deploying WIPS, then solid Rogue AP protection is the first thing you want out of it. Rogue AP protection is also one of the most important requirements for wireless PCI DSS compliance. While certain types of Rogue APs are trivial to detect, others are extremely difficult. Also, there are many caveats to the workflow for Rogue AP protection in large enterprise networks.

To the extent these aspects are addressed by different solutions, there is a wide spectrum from checkmark to genuine value. Below are some simple clues that help gauge the level of rogue protection obtained from a specific WIPS solution.


Clue #1: Automatic Rogue Containment


Some WIPS systems show a legal warning when you attempt to activate automatic rogue protection.


Cisco WLC-Fluke aWIPS version 7.4


This means that “rogue on wire” detection is prone to false alarms.  In other words, the system can incorrectly tag friendly neighborhood APs as rogues on wire (a “false positive”). With that possibility, it is impossible to automate rogue containment, since the user would be taking on the liability of disrupting a neighbor’s network. Seriously, how many users would feel comfortable proceeding after reading this legal disclaimer?

Accordingly: the possibility of any false positive (there is no leeway here) = automatic containment is impractical due to the liability of neighbor disruption.


Clue #2: Rogue Detection via Wired / Wireless MAC Relation


The most primitive rogue connectivity detection is to look for a numerical relation (offsets of 2 or 64 between addresses are common) between an AP’s wired and wireless MAC addresses.  In fact, many run-of-the-mill WIPS do exactly that to earn a rogue-detection checkmark with the least amount of depth.

Rogue detection via wired/wireless MAC relation
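To make the weakness concrete, here is a minimal sketch of what such a MAC-relation check amounts to; the function and offsets are illustrative, not any vendor’s actual implementation.

```python
def macs_numerically_related(wired_mac, wireless_mac, max_offset=64):
    """Naive 'rogue on wire' test: flag the AP only if its wireless BSSID is
    within a small numeric offset of a MAC address seen on the wire."""
    def as_int(mac):
        return int(mac.replace(":", "").replace("-", ""), 16)
    return abs(as_int(wired_mac) - as_int(wireless_mac)) <= max_offset

# A typical AP whose BSSID is the wired MAC plus 2 is caught ...
print(macs_numerically_related("00:11:22:33:44:00", "00:11:22:33:44:02"))
# ... but a rogue bridging through an unrelated radio (e.g. a USB Wi-Fi
# adapter on a laptop) sails straight past the check: a false negative.
print(macs_numerically_related("00:11:22:33:44:00", "0a:0b:0c:0d:0e:0f"))
```

Any rogue whose radio MAC bears no relation to its wired MAC is invisible to this test, which is exactly the false-negative problem discussed next.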


Saying that a WIPS detects rogues on the wire using MAC relations is the same as saying that it fails to detect rogue APs whose wired and wireless MAC addresses bear no such relationship.  When it is known that some configurations of rogue APs are outside the system’s scope for network-connectivity detection, the entire neighbor AP list is suspect.

It is like the classic game of Minesweeper, where every unturned tile is a suspect. Playing Minesweeper is fun, but manually examining thousands of APs to ensure that there is no undetected rogue among them is not!

In short: partial “rogue on wire” detection (a “false negative”) = a mountain of manual work to ensure there is no undetected rogue, and a high risk of lapses.


The two clues outlined above show that the writing is on the wall, and reflect the level of robustness of the underlying security platform, in particular for a WIPS solution. I will cover many more of these telltale clues in this rolling blog series. Stay tuned.




802.11n, Best practices, PCI, WiFi Access, Wireless security, WLAN networks