Archive for the ‘802.11ac’ Category

Peek Inside 802.11ac Access Point Hardware Designs

March 25th, 2014

There is a large and ever-increasing assortment of enterprise access points offered by wireless vendors today. APs differ in their number of radios, number of streams, 11n/11ac support, PoE compatibility, peripherals, price, etc. Amid this overwhelming diversity, have you wondered what lies in the hardware guts of these APs? What hardware design concepts give an AP its feature personality? How does the hardware ecosystem work among chip vendors, ODMs and AP vendors? What are the state-of-the-art hardware architectures for 802.11ac APs? This blog post discusses key hardware concepts, such as SoC, dedicated CPU and offload architectures, that are commonly found inside APs, along with the ODM sourcing model for Wi-Fi APs and its implications for product offerings.

At a high level, AP hardware has to perform three types of processing: RF, baseband, and MAC/L2/packet. The first two are mainly concerned with signal modulation/demodulation tasks, while the third takes care of the 802.11 MAC and L2 protocol implementation and provides the hooks into packet processing to build features like QoS, firewalls, bandwidth limiting, etc. The RF and baseband processing are done by the “radio module”. The radio module also performs certain time-sensitive MAC/L2 functions such as ACKs, RTS/CTS, time stamping, etc. The bulk of the MAC/L2/packet processing is performed by the “host CPU module”.

SoC (System on Chip) Architecture

In this architecture, a single chip does the job of the host CPU module and one radio module. A dual radio AP can be implemented with the SoC architecture using two chips: the main chip includes the host CPU (usually a MIPS or ARM core) and one radio module (a/b/g/n), and a second, separate chip provides the second radio module (a/b/g/n or ac/b/g/n). The second chip is either fixed to the board or provided on a PCI Express (PCIe) card that plugs into a PCIe slot on the board and interfaces with the CPU chip. The PCIe form factor for the radio module makes it easier to swap the radio module and launch updated hardware versions of the AP at a later point in time. In SoC designs, both chips usually come from Wi-Fi chip manufacturers such as Qualcomm (Atheros), Broadcom, etc. The radio modules are usually dual band tunable, but locked to one band – usually the SoC radio is locked to 2.4 GHz and the separate radio is locked to 5 GHz.

SoC designs are cost efficient because of the reduced BoM of the board. They also draw less power, making them attractive for APs meant to operate within the 802.3af power budget.
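For a back-of-the-envelope feel for that power budget, here is a minimal sketch; every component number in it is an illustrative assumption, not a measurement from any particular AP:

```python
# Rough 802.3af feasibility check for an AP design (illustrative numbers only).
# 802.3af delivers 15.4 W at the PSE; roughly 12.95 W is available at the AP
# after worst-case cable loss.
AF_BUDGET_W = 12.95

# Hypothetical power draws (watts) for a dual radio SoC design.
soc_design = {
    "SoC (CPU + 2.4 GHz radio)": 4.0,
    "5 GHz radio chip": 3.0,
    "Ethernet PHY + memory + misc": 2.5,
}

# Hypothetical draws for a three-chip dedicated CPU design.
dedicated_cpu_design = {
    "Host CPU chip": 5.0,
    "2.4 GHz radio chip": 3.0,
    "5 GHz radio chip": 3.0,
    "Ethernet PHY + memory + misc": 2.5,
}

for name, parts in [("SoC", soc_design), ("Dedicated CPU", dedicated_cpu_design)]:
    total = sum(parts.values())
    verdict = "fits" if total <= AF_BUDGET_W else "exceeds"
    print(f"{name} design: {total:.1f} W -> {verdict} the 802.3af budget of {AF_BUDGET_W} W")
```

With component draws in this ballpark, the two-chip SoC design fits comfortably under 802.3af, while the three-chip design starts crowding the budget – which is exactly the tradeoff discussed below.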

Sample layout of dual radio SoC design


Dedicated CPU Chip Architecture

Unlike the SoC design, this architecture uses a separate host CPU chip. A dual radio AP therefore requires three chips – one for the host CPU and two additional chips (fixed to the board or provided on PCIe cards) for the two radio modules, which can be 11n or 11ac. The CPU chip typically comes from an embedded microprocessor vendor such as Freescale, Cavium, etc., while the radio chips come from a Wi-Fi chip vendor. However, this year we should see designs with the host CPU chip also coming from the Wi-Fi chip vendor, since some of them possess powerful embedded microprocessor technology from their other product lines.

Due to the three-chip design, these APs usually cost more and can have difficulty operating at full function within the 802.3af power budget. On the flip side, a dedicated CPU provides more processing capacity and may also provide hardware-assist features for IPsec encryption, DPI (deep packet inspection), etc.

Sample layout for dual radio dedicated CPU design


Offload Architecture for 802.11ac APs (Second Microprocessor to Assist the Host CPU)

Up to and including 11n, the de facto way of implementing the bulk of MAC/L2/packet processing was entirely within the host CPU. With the increasing speeds of 11ac, a new concept of “offload processing” has emerged. In the offload concept, a second microprocessor (in addition to the host CPU) is embedded in the radio chip (which is separate from the CPU chip). This second microprocessor handles close to 75% of the MAC/L2/packet processing tasks on the radio chip itself, leaving only the remaining 25% or so to be done inside the host CPU.

The offload architecture significantly reduces the load on the host CPU at 802.11ac speeds and thus makes it possible to build a full-function 11ac AP using the SoC design. With the offload concept and SoC design, a dual radio 11ac AP can use one chip for the host CPU and the b/g/n radio module, and a second chip that includes both the ac radio module and the second microprocessor that handles the offload processing.
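To see why that split matters at 11ac speeds, here is a toy model; the per-packet cycle cost, clock rate and traffic figures are made-up assumptions purely for illustration:

```python
# Toy model: host CPU headroom with and without offload (all numbers illustrative).
avg_packet_bytes = 1000
throughput_mbps = 400          # aggregate wireless throughput target
cpu_cycles_per_packet = 20000  # assumed full MAC/L2/packet processing cost
host_cpu_hz = 700e6            # assumed embedded CPU clock

packets_per_sec = throughput_mbps * 1e6 / 8 / avg_packet_bytes

for offload_fraction in (0.0, 0.75):
    host_cycles = packets_per_sec * cpu_cycles_per_packet * (1 - offload_fraction)
    print(f"offload {offload_fraction:.0%}: host CPU load ~ {host_cycles / host_cpu_hz:.0%}")
```

With numbers in this ballpark, the host CPU alone cannot keep up at 11ac rates, while a 75% offload leaves it comfortable headroom – which is the whole point of the architecture.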

The flip side of the offload architecture is the higher sophistication required of the software running on the host CPU to perform special packet handling functions such as raw packet injection. This is because, in the offload design, much of the MAC/L2/packet processing happens in the radio chip itself, so the host CPU needs to interact with the second microprocessor on the radio chip via API and communication calls to implement special processing tasks.

Original Design Manufacturer (ODM)

ODM vendors (which operate from manufacturing hubs in East Asia) design AP hardware from the reference designs provided by the chip vendors based on the architectures described above, and then offer these hardware platforms to the AP vendors. ODMs often also make modifications of their own to the reference designs to improve their characteristics, such as fitting better antennas on the board.

The AP vendors choose hardware platforms from the ODM offerings for different products as appropriate for specific market verticals. AP vendors can choose enclosures designed by the ODMs and have them branded with their company logos. That is why we sometimes see similar-looking APs offered by different vendors. AP vendors can also have ODMs design special enclosures for them. In this case, even if the APs do not look similar, you can encounter common hardware layouts when you pry them open if they share the same ODM board genesis.

ODMs also assist vendors with platform certifications for regulatory compliance. Due to the easy availability of validated hardware cores and the maturity of the ODM model, AP vendors can now deliver new hardware platforms relatively quickly. Though ODM vendors typically accommodate some platform customization – changing the power amplifier rating or quality, adding extra Ethernet port(s), providing a USB port, adding a third radio, etc. – in general the core hardware differences between APs will be marginal in the future (barring some highly specialized hardware designs). Couple that with Wi-Fi application verticals and deployment use cases expanding and becoming more diverse, and the bulk of the value needs to come from the software that AP vendors add on top of these cores in the areas of performance, network services, application enablement, security, manageability and others.

Images via the OpenWrt wiki

802.11ac

Education Technology at BETT 2014: Wireless as a Service, 802.11ac and Social Wi-Fi

March 14th, 2014

This first post is from Zara Marklew, EMEA channel manager, who is the newest addition to the AirTight team in Europe. Connect with Zara on LinkedIn and Twitter. Welcome, Zara!

BETT 2014, the UK’s learning technology show, has been and gone, but it certainly won’t be forgotten! For those in the educational technology sector – from primary school teachers all the way to network managers of colleges and large secondary schools – this was THE event, memorable for new technology and for aching feet after four days of conference.

Wireless as a Service for Education

So what was all the fuss about, and why was #BETT2014 trending on the social feeds? There were a few noticeable trends this year, noted by attendees and exhibitors alike. First came “XXX as a service”! As educational funding changes, so does the need to adapt to and serve the new legislation whilst still meeting educational IT needs in a constantly evolving technology landscape.

Cloud Wi-Fi as a service


One that stuck out was North Pallant Cloud and their WISE (Wireless as a Service for Education). Using secure cloud-managed Wi-Fi provided by AirTight Networks, they are offering an OPEX model that fits the new financial structures of schools and enables secure Wi-Fi to be deployed. This allows school administrators to deliver all the new applications to their teachers and students whilst getting assurance that the network is secured, and it enables simple management and deployment for the heads of IT.

Verdict on 802.11ac in Education Is Still Out

There wasn’t as much talk of the new Wi-Fi standard, 802.11ac, from attendees, which was in complete contrast to most Wi-Fi vendors at the show. The general opinion was that schools don’t have the money to invest in .ac devices yet, and that in reality a robust, well-designed and well-deployed .n network will still allow streaming video to all students. Most of those I spoke to saw 802.11ac as a shrewd plan by Wi-Fi vendors’ marketers to increase sales rather than as groundbreaking technology.

I’m pretty sure this opinion will be more divided come BETT 2015. Most schools appear to be first or second generation Wi-Fi users, and the realization is dawning that security and management of Wi-Fi – now the primary application delivery network in education – are actually pretty important! To that extent, the newer breed of Wi-Fi vendors, those who are cloud managed, were the vendors to see, with the common message that the controller is dead. It’s not often you see vendors “buddying up”, but that was the common messaging.

Social Wi-Fi: Fit for Education?

The advent of social Wi-Fi seems to have split opinion in terms of its value to the educational sector. The majority of IT people saw no value, whereas a sizable number of principals I spoke with saw it as an invaluable way of promoting the school and being seen as technically “with it.” Competition is fierce amongst schools, and social Wi-Fi is a way of engaging with the students, who according to several people “go zombie” as soon as class is done and lurch around hallways, seemingly sucked into their mobile devices. What better way to tell them of events, news and the like than by communicating on the students’ preferred platforms, Twitter and Facebook. We’ll see over the coming months which way this goes, but it is here to stay in one format or another.

With Wi-Fi now running smart boards, tablets, laptops, desktops, telephony, cameras, entry systems – in fact, pretty much everything – 2014 looks set to be memorable for those in the business and those looking to utilize the technology.

A gold star and a certificate of merit for all exhibitors and attendees and roll on BETT 2015, my lanyard is ready!

802.11ac, 802.11n, Education

HIMSS 2014 – Big on Wireless

March 6th, 2014

This year’s show was huge. According to the event’s organizers, there were 1,200 exhibitors and 38,000 healthcare professionals in attendance, with more nurses, physicians, IT staff and executives at this year’s HIMSS than ever before.

Connecting the Right People to the Right Information at the Right Time

Some of the main areas of focus this year were patient safety, care quality, patient engagement, access to quality care, and affordability, but the overarching theme of the show seemed to be enabling the right people to get access to the right information at the right time.

Of special note was the Interoperability Showcase with its one acre of space. This is where over 100 diverse systems demonstrated interoperability for typical patient workflows in different healthcare settings. One of the main objectives of the Interoperability Showcase is to improve overall patient care. This area of the show garnered a lot of attention and interest, understandably so, as ultimately the quality of care delivered is very much contingent on how well all of the various devices and applications work together. Hopefully the Interop Showcase will be part of future HIMSS conferences.

New Wi-Fi enabled devices and applications at HIMSS14

Wi-Fi helps disabled people walk


One of the most interesting Wi-Fi capable devices at this year’s HIMSS was a Wi-Fi enabled bionic exoskeleton. The Ekso Bionics unit (pictured above) was featured in the Lockheed Martin booth. This device is for patients with lower extremity paralysis or weakness. It enables patients to stand and walk, and it can assist them with their rehabilitation. The unit is equipped with a single Wi-Fi radio, which currently supports two data streams. One stream allows engineers to see real-time telemetry data to determine how the unit is performing. The other stream is for the unit’s user: information such as steps taken, distance traveled, etc. is sent over the air to an application that the user can access later.

WLAN: a necessity in today’s healthcare delivery

HIMSS14 saw all of the enterprise class WLAN equipment manufacturers in attendance. And while there have not been any major publicized security breaches in healthcare lately like there have been in retail, Wi-Fi equipment companies were again talking about security and protecting patient records at this year’s show. Enabling BYOD is still front and center. Doctors and other care providers apparently really like using their tablets… So onboarding and mobile device management (MDM) solutions were the topics of many conversations between WLAN equipment manufacturers and HIMSS14 attendees.

Another topic of discussion at WLAN booths was real-time location systems (RTLS) enhancements. A couple of WLAN solution providers were discussing 11ac’s impact on high definition video conferencing and moving large diagnostic images around via Wi-Fi. Another popular topic was high availability and uninterrupted care.

Were HIMSS attendees excited about catching 11ac Wave 1?

A number of IT executives in attendance stated that they are looking forward to 11ac solving density and capacity challenges; however, they will likely need to be a bit patient as there were more execs looking forward to 11ac than there were 11ac capable client devices at the show.

While there may have been 11ac capable devices at the show, they were not easy to find. Checking in with manufacturers of all types of Wi-Fi enabled medical and communication devices – makers of infusion pumps, patient monitoring devices, RTLS systems, voice handsets, video conferencing systems, etc. – did not uncover a single 11ac capable device. One vendor of workstations on wheels (WOW), equipped with a high definition video conferencing system, stated that they are in the process of retrofitting their WOW systems with 11ac radios. Their stated main reason for doing so is scalability, as they have discovered that HD video conferencing can easily overwhelm 11n, even at relatively low client densities. (But that is a subject for another blog…)
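For a rough sense of why HD video strains an 11n cell even at low client density, consider this sketch; the per-stream bitrate and effective cell throughput are assumptions for illustration only:

```python
# Back-of-the-envelope: HD video conferencing load on a single 11n radio.
# All inputs are illustrative assumptions.
effective_11n_mbps = 80   # realistic loaded TCP throughput for a 2x2:2 11n radio
hd_stream_mbps = 6        # assumed two-way HD conferencing load per endpoint

for clients in (5, 10, 15):
    load = clients * hd_stream_mbps
    print(f"{clients} WOW endpoints: {load} Mbps offered vs ~{effective_11n_mbps} Mbps "
          f"available ({load / effective_11n_mbps:.0%} of the cell)")
```

Even with these modest assumptions, a dozen or so conferencing endpoints saturate the cell, which is consistent with that vendor's motivation to move to 11ac.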


802.11ac, Healthcare, WiFi Access

Corner Cases

February 26th, 2014

Most Wi-Fi manufacturers’ marketing departments would have you believe that 99% of all deployments are what I’d call “corner cases.” I call B.S. (as usual).

Here are the high-density/high-throughput (HDHT) corner cases that so many manufacturers would have you believe are so prevalent:

  • Large K-12 and University libraries, cafeterias, lecture halls, and auditoriums
  • Stadium or gymnasium bowls
  • Large entertainment venues (e.g. music and theater halls, night clubs)
  • Trade shows
  • Urban hotspots
  • Airports

Combined, these use cases comprise less than 1% of all Wi-Fi installations – in other words, the opposite of what many marketing departments would have you believe. Let’s look at this from another angle. Here’s a list of use cases that do NOT fall into the HDHT category, but may have other technical challenges or requirements, yet these same marketing departments want customers to believe they are HDHT environments.

  • K-12 classrooms*
  • Malls
  • Majority of airports

* Note: Some folks believe that one AP per classroom (or even one AP per two classrooms) is a bad idea due to adjacent channel interference (ACI) or co-channel interference (CCI), but that’s a design matter based on a long list of design criteria that can include wall construction materials, AP output power, client device type, client device output power, and MUCH more. I assert that one AP per one (or two) classrooms is a good network design in many K-12 environments, and this usually means fewer than 35 devices per classroom, worst case. 35-70 devices per AP (2 radios) does not constitute high density, but may necessitate good L1, L2 QoS, and L7 handling.

Consider all of the common deployments that constitute the majority of WLAN environments:

  • Office environments
  • Warehouses
  • Manufacturing
  • Hospitals
  • Distributed healthcare facilities
  • Cafes
  • Bookstores
  • Hotels

So if HDHT handling isn’t a big deal in 99% of the use cases, what is important? If you ask that question of those same vendors’ marketing departments, they would say Performance! Once again, I call B.S.

After speaking with a variety of network administrators and managers, I’ve found it very difficult to find anyone who can produce statistics showing an AP sustaining more than 10 Mbps over the course of an 8-hour business day. Even the peak throughput on the busiest APs isn’t all that high (a couple of hundred Mbps sustained only for a couple of minutes while large files are being transferred). It’s been my experience that busy branch offices, with a single AP serving 50-60 people, are where you find the most sustained WLAN traffic over a single AP.

If 10 Mbps is considered “a very busy AP”, and decent 2×2:2 802.11n APs can sustain 200+ Mbps of throughput across two radios given the right RF and client environment, then why is everyone talking about performance? I hear vendors bragging about their 3×3:3 11ac APs being capable of 900+ Mbps of throughput under optimal conditions. While that kind of throughput is sexy to IT geeks who think that “too much is never enough”, most customers just want it to work. At 200-400 Mbps of throughput for 802.11n APs, why do we care so much about buying premium-priced 11ac APs again?
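The arithmetic behind that argument is worth making explicit, using the same figures quoted above:

```python
# Utilization math behind the argument above.
busy_ap_sustained_mbps = 10        # "a very busy AP" averaged over an 8-hour day
n_ap_capacity_mbps = 200           # decent 2x2:2 11n AP, both radios
ac_ap_capacity_mbps = 900          # vendor-quoted 3x3:3 11ac under optimal conditions

print(f"11n utilization:  {busy_ap_sustained_mbps / n_ap_capacity_mbps:.1%}")
print(f"11ac utilization: {busy_ap_sustained_mbps / ac_ap_capacity_mbps:.1%}")

# Data actually moved in an 8-hour business day at 10 Mbps sustained:
gb_per_day = busy_ap_sustained_mbps * 8 * 3600 / 8 / 1000  # Mbit/s * s -> Mbit -> GB
print(f"Data moved per day: ~{gb_per_day:.0f} GB")
```

A "very busy" AP is running at roughly 5% of an 11n AP's capability and about 1% of a premium 11ac AP's, which is the whole point.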

What do we get out of those 11ac APs anyway? 256-QAM is useful only at short range and only for 11ac clients. TxBF is only good at mid-range, and only for those clients that support it, which is basically none. Rate-over-range is better for uplink transmissions, but if you’re designing for capacity, voice, or RTLS, then this is of no consequence. There may be slightly fewer retransmissions due to better radio quality, but that’s mostly “who cares” also. Bottom line: don’t upgrade your 11n infrastructure for the purpose of speed. If speed (e.g. rate-over-range and raw throughput) is your goal, spend your budget on refreshing your clients to 11ac first.

Customers who rush out to buy the latest, greatest, fastest AP end up paying a big price premium for a performance gain that they’ll never, ever, ever, ever use. It’s just silly. They get duped by the marketing message that HDHT handling and ultra high-performance matter in 99% of use cases, when in fact it matters in <1% of the real world use cases. Wi-Fi infrastructure technology is progressing quickly, and the PHY/MAC layers are so far ahead of typical use cases that customers should be focused on correct Layer-2 design and receiving value above Layer-2:

  • Robust, global, cloud management and services option
  • Strong security, compliance and reporting
  • Device tracking / location services
  • Social media integration (i.e. Facebook, LinkedIn, Twitter)
  • Guest and retail analytics
  • Managed services enablement

If you’re going to buy (or upgrade to) an 11ac infrastructure, there’s a very important reason to do it that is unrelated to the speed at which you move frames across the air: intelligence. Some APs don’t have the horsepower to do any significant local processing, and that leaves three options related to infrastructure intelligence:

1) don’t have any
2) send everything to the cloud
3) send everything to a controller

I prefer that APs have enough oomph to get the job done if that’s the optimal place to do the work. There are times when using the cloud makes sense (distributed, analytics), there are times when using the AP makes sense (application visibility/control), and there are times when using a controller makes sense (2008-2009). #CouldntResist

I’ll summarize all of this by asking that prospective customers of Wi-Fi infrastructure remember that they will likely never use even a small fraction of the throughput capabilities of an AP. What will have a significant impact is Wi-Fi system cost, Wi-Fi system architecture, and network design. Don’t get duped by the loud, obnoxious marketing hype around the speed/throughput. Think twice, buy once.

 

802.11ac, 802.11n, Best practices, WLAN planning

Wi-Fi Speeds-n-Feeds Are So Old-school

October 30th, 2013

Speed-n-feeds are not the future of enterprise Wi-Fi. Speed-n-feeds are like my grandmother’s potatoes. I’ll explain.


Speed is a given. Speed is a commodity. This is what you talk about when you have nothing else to offer, such as system intelligence. Some companies keep trying to rehash the speeds-n-feeds story like my grandmother used to treat potatoes. First, you’d have baked potatoes. If you didn’t eat all of them, the next night, you’d have mashed potatoes – made from those same potatoes, of course. If there happened to be any left-overs after that, you’d have fried potato cakes the next night. Believe me, the list of how those potatoes could be served was endless until those potatoes were gone. Same potatoes, different day.

With 802.11ac, networks have plenty of speed. In fact, I will bet you a cold beer that far more bandwidth goes unused on a daily, hourly, and per-minute basis than is ever used – even with an 802.11n network. You would be hard-pressed to find an AP in any enterprise environment that exceeds 10% of its throughput capability when averaged over a normal 8-hour work day. If you find even one, please give me a shout with statistical proof. I’m interested to see that data.

There is Wi-Fi bandwidth to spare within most enterprise networks that have 802.11n or 802.11ac technology. The bottleneck isn’t in the radio capability, the architecture, the channel airtime, the AP CPU power, or even the available spectrum. The only exceptions are major interference sources, which can affect channel airtime and available spectrum, but that’s off-topic because you don’t design capacity around interference sources.

If there’s an actual speed bottleneck anywhere, it’s found in the battery life of mobile devices, which causes the use of single spatial stream (1SS) radios. Truth be told, even mobile devices using 1SS are fast, ranging from 65 to 433 Mbps, with about half of this data rate being considered usable throughput.

If you have some kind of crazy high-density scenario, sure, you could potentially run into an airtime bottleneck on specific APs at specific times, but that’s a sub-one-percent use case in most enterprises, and buying an entire solution around a sub-one-percent use case seems silly given the price multiple that you are apt to pay.

Most vendors are still building wave-1 802.11ac APs, and the industry is already talking about forthcoming wave-2 technologies, which are more than a year away, highlighting the marketing frenzy around performance. Look, I’m not against performance, but I’m saying that it’s far from the most important consideration.  In fact, at this point, it’s pretty far down the list.

Well, since there’s plenty of speed, what aspects are more important?

  • Solutions focused on vertical markets
  • MSP (managed service provider) enablement
  • Simplicity and intuitiveness of use and deployment
  • Sales model and process: capex, opex
  • System security, redundancy and stability
  • System maintenance, monitoring and compliance reporting

Those are just a few, but they are enough to point out that all of the hoopla around “just speed” is off-target. Almost any enterprise vendor can now provide reasonable connectivity, and they have the reference customers to prove it. Therefore, winning in the market is no longer just about connectivity, but rather about solving customer problems by providing a focused solution.

You like apples? ;)

Image via foodieandfellow.com. For an uncommon twist on potato cakes, get the recipe: Orange ricotta sweet potato pancakes

802.11ac, 802.11n, WiFi Access

Bang for the buck with explicit beam forming in 802.11ac

October 16th, 2013

 


802.11ac has brought with it MIMO alphabet soup… spatial streams, space-time streams, explicit beam forming, CSD, MU-MIMO. Alphabet soup triggers questions to which the curious mind seeks answers. This post is an attempt to explore some questions surrounding explicit beam forming (E-BF), which is available in Wave-1 of 802.11ac. E-BF is a mechanism to manipulate transmissions on multiple antennas to boost SNR at the target client.

How is E-BF related to spatial streams?

E-BF is a technique different from spatial streams. E-BF can be used whenever there are multiple antennas on the transmitter, irrespective of the number of spatial streams used for transmission.

In a transmission using multiple spatial streams, distinct data streams are modulated onto signals transmitted from distinct antennas. The signals from different antennas get mixed up in the wireless medium after transmission. The receiver uses signal processing techniques to segregate the distinct data streams from the mixture. The ability to separate these streams depends on the channel conditions between the transmitter and the receiver (to isolate ‘S’ streams at the receiver, the channel matrix needs to have rank ‘S’ or more). There is no channel-dependent processing of the signal at the transmitter; receiver performance is channel dependent. Some key points regarding multiple spatial streams (spatial multiplexing) are:

  • To support ‘S’-stream transmission, both AP and client must have at least ‘S’ antennas
  • A rich scattering environment (e.g., indoors) is conducive to a high-rank channel matrix
  • There is no need to send channel feedback from the receiver to the transmitter.

In a transmission using E-BF, the spatial streams are pre-processed (and pre-mixed) to match them to the channel characteristics from the transmitter to the receiver, and the output of the pre-processor is transmitted from the different antennas. Feedback from the receiver about the channel characteristics is used in the pre-processing. For a practical implementation called the ZF (Zero Forcing) receiver, E-BF boosts the SNR of the spatial streams at the receiver (see the sketch after the list below). Some key points regarding E-BF are:

  • Feasible with multiple antennas on the AP, irrespective of the number of spatial streams
  • Affects SNR of spatial streams at the receiver
  • Requires channel dependent pre-processing of signal at the transmitter
  • Requires feedback on channel characteristics from the receiver to the transmitter
  • Does not require multiple antennas at the receiver.
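A small numpy sketch makes these mechanics concrete; it is a deliberately idealized model (random flat-fading channel, perfect feedback, no noise modeling) rather than anything resembling a real 802.11ac implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random 3x3 channel matrix H: rows = receive antennas, cols = transmit antennas.
H = (rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))) / np.sqrt(2)

# E-BF rests on the SVD: H = U diag(s) V^H. The transmitter pre-processes the
# streams with V (this is what the channel feedback lets it compute), the
# receiver post-processes with U^H, and each stream then sees its own gain s[i].
U, s, Vh = np.linalg.svd(H)

x = rng.standard_normal(3) + 1j * rng.standard_normal(3)   # 3 spatial streams
y = U.conj().T @ (H @ (Vh.conj().T @ x))                    # precode, channel, combine

print("per-stream gains (singular values):", np.round(s, 2))
print("received / transmitted stream magnitudes:", np.round(np.abs(y / x), 2))
```

The gap between the largest and smallest printed gains is exactly the unequal per-stream SNR boost discussed in the tradeoff section below.
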
When is E-BF truly beneficial?

In general, E-BF is truly beneficial when the number of spatial streams in use is less than the number of antennas on the AP. In Wi-Fi, this most commonly happens when the client has fewer antennas than the AP. For example, most smartphones and tablets have only 1 antenna.

Stream vs beam tradeoff:

For the example of a 3-stream AP and a 3-stream client, adding E-BF on top of a 3-stream transmission may not give significant benefit. This is because, with E-BF, different spatial streams typically experience unequal boosts in SNR. The SNR can be significantly boosted for some spatial streams, while for others the boost is not significant; there can even be SNR degradation for some streams compared to the case when E-BF is not used. To be precise, the SNR boost for each spatial stream is dictated by the corresponding singular value of the channel matrix, and the singular values range from high to low for practical channels (E-BF is based on a technique called Singular Value Decomposition, or SVD). Couple this with the fact that practical implementations use the same MCS on all spatial streams: this means either using the MCS supportable by the weakest-SNR spatial stream for all streams, or using a high MCS for the strong streams and dropping the weak ones. There is an excellent explanation of this tradeoff in Chapter 13 of the book by Perahia and Stacey (if you are up for reading some math!).

However, if a 3-stream AP can support only a 1-stream transmission to the client, E-BF can give significant gain. This is commonly the case with smart devices that have only 1 antenna. Here, the single stream will most likely get an SNR boost, and there is no other stream to counter it.
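That single-stream case is easy to sanity-check numerically; again an idealized sketch, with a fixed single-antenna transmission taken as the no-E-BF baseline:

```python
import numpy as np

rng = np.random.default_rng(1)

# 1x3 channel: 3-antenna AP transmitting a single stream to a 1-antenna client.
h = (rng.standard_normal(3) + 1j * rng.standard_normal(3)) / np.sqrt(2)

# Without E-BF, assume the stream is sent from one antenna; with E-BF, the
# transmit weights are matched to the channel: w = h* / ||h||.
power_no_bf = np.abs(h[0]) ** 2                 # single-antenna transmission
w = h.conj() / np.linalg.norm(h)
power_bf = np.abs(h @ w) ** 2                   # equals ||h||^2

print(f"SNR gain from E-BF: {10 * np.log10(power_bf / power_no_bf):.1f} dB")
```

With 3 transmit antennas, the matched weights collect energy from all three channel coefficients instead of one, so the single stream gets the full beamforming gain with no weak sibling stream to drag the MCS down.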

How much overhead does E-BF feedback cause on wireless bandwidth?

E-BF requires feedback from the receiver to the transmitter about the channel characteristics. To trigger this feedback, the transmitter sends a sounding packet to the client. The client performs channel measurements on the sounding packet and responds to the AP with the channel feedback. A question that often comes up with E-BF is how much wireless link overhead the feedback report causes. To answer that question, take a look at this spreadsheet. From the spreadsheet, it appears that the feedback overhead is relatively small (only about 0.1% of airtime), particularly for the case where E-BF is going to be most beneficial, e.g., a 3- or 4-antenna AP talking to a 1-antenna client.
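The spreadsheet itself is not reproduced here, but a simplified version of the calculation looks roughly like this; the frame size, rate and sounding interval below are order-of-magnitude assumptions, since the real numbers depend on the negotiated feedback format:

```python
# Simplified E-BF feedback overhead estimate (all inputs are assumptions).
sounding_interval_ms = 100      # how often the AP sounds the client
feedback_bytes = 180            # compressed feedback for a 3x1 link, order of magnitude
feedback_rate_mbps = 24         # rate at which the feedback frame is sent
ndp_overhead_us = 50            # NDP announcement + NDP + interframe gaps, roughly

feedback_airtime_us = feedback_bytes * 8 / feedback_rate_mbps  # bits / (bits/us)
total_us_per_sounding = ndp_overhead_us + feedback_airtime_us

overhead = total_us_per_sounding / (sounding_interval_ms * 1000)
print(f"Airtime overhead: {overhead:.2%}")
```

With these inputs the exchange costs on the order of 100 microseconds every 100 ms, i.e., roughly 0.1% of airtime, consistent with the spreadsheet's conclusion.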

All factors considered, E-BF appears to provide benefits for smartphones and tablets, which typically have only 1 antenna and hence cannot support multiple spatial streams when connected to a 3- or 4-stream AP. On the other hand, when there are multiple antennas on both sides of the link (such as a 3- or 4-stream laptop connected to a 3- or 4-stream AP), spatial multiplexing without E-BF can be as good as spatial multiplexing with it. These are inferences drawn from MIMO principles, and it would be interesting to see if they match up with measurements from practical Wave-1 environments.


 

802.11ac, 802.11n, WiFi Access

Hunting down the cost factors in the cloud Wi-Fi management plane

October 3rd, 2013

 

Mature cloud Wi-Fi offerings have already gone through a few phases. They started with bare-bones device configuration from the cloud console and over the years matured into a meaty management plane for complete Wi-Fi access, security and complementary services in the cloud.

Alongside these phases of evolution, optimizing the cost of operating the cloud backend has always been an important consideration. It is critical for cloud operators and Managed Service Providers (MSPs). This cost dictates what end users pay for cloud Wi-Fi services and whether attractive pricing models (like AirTight’s OPEX-only model) can be viable in the long run. It is also important to the bottom line of the cloud operator/MSP.

Posed with the cost question, one might impulsively say that cost is driven by capacity, in terms of the number of APs that can be managed by a given unit of compute resource in the cloud. That is an important cost contributor, but not the only one!

 

What do the cost models from cloud operation reveal?

 

We have monitored cloud backend operation costs for the past several years. Based on that data, we have built cost models, and these models have led to the discovery of the factors that are significant cost contributors. Identifying a cost component is a major step towards reducing it; the reduction itself is often implemented by a combination of technology and process innovations.

 

Draining the cost out of the cloud

 

Scalability

This one is a no-brainer for anyone with their head in the cloud. Scalability generally refers to the number of APs that can be managed with a unit of compute resource. Higher scalability helps reduce the cost. Enough said.

Provisioning

As customers of diverse scales (10 APs to 10,000 APs) are deployed in the cloud at diverse paces, unused capacity holes often result in the provisioned compute resources. Capacity holes are undesirable because the cloud operator or MSP has to pay for them, yet they do not get utilized towards managing end user devices.

The unused capacity problem needs to be solved at two points in time: initial provisioning and re-provisioning. Clearly, when new customers are deployed, you try to fit them into right-sized capacity buckets. Assuming they love your product, they will then deploy more and start to outgrow their capacity buckets (but you also cannot over-provision, else there will be a capacity hole from the beginning). This is the re-provisioning time, and at that point the cloud architecture and processes need to be able to seamlessly migrate customers to bigger capacity buckets.

Personnel

The very reason customers choose the cloud is that they want a plug-n-play experience. As such, the patience level of a cloud customer is often lower than that of one choosing the onsite deployment option. This necessitates a higher level of plug-n-play experience to avoid support calls.

There are various points in the life cycle that have a high tendency to generate support calls. One is when devices connect to the cloud – or, let’s say, are not able to connect to the cloud. Another critical time is during software upgrades. Issues also often arise during re-provisioning, as discussed above, when customers are migrated between compute resources. The cost of attending to support calls can be a significant factor if these experiences are not super smooth. Additional complexities arise when APs are sold through the channel, but the cloud is operated by the vendor or another MSP.

The pricing logic behind reducing personnel cost at the MSP is as follows. The end user is eliminating onsite personnel cost by migrating to the cloud, and hence paying less on a TCO basis. When the experience is not smooth, this cost is transferred to the personnel at the cloud operator or MSP. The cloud operator and MSP cannot make money if they pick up a significant part of this cost themselves.

Latent Resources

Certain features such as high availability and disaster recovery have the potential to give rise to latent resources. Latent resources are different from the capacity holes discussed before. Latent resources are like insurance: they don’t get utilized most of the time, but they need to be maintained in great shape. Brute-force implementation of these redundancy features has been found to be a significant cost contributor in cloud operation.

For any cloud services platform, the above pain points are exposed only after years of operational experience and teething pains with diverse customer deployments. That is why it would be appropriate to say that there are two parts to viable cloud operation – one is the computing technology that enables complete management features, and the other is operational maturity. Overlook either one and the cloud can become unviable for the operator/MSP and customers in the long term.

 

Additional references:

Wireless Field Day 5, AirTight Cloud Architecture video

Aruba Debuts Bare-Bones Cloud WLAN at Network Computing by Lee Badman

Next generation cloud-based Wi-Fi management plane

Controller Wi-Fi, controller-less Wi-Fi, cloud Wi-Fi: What does it mean to the end user?

AirTight is Making Enterprise Wi-Fi Fun Again

Different Shades of Cloud Wi-Fi: Rebranded, Activated, Managed

 

802.11ac, 802.11n, Cloud computing, WLAN networks

MU-MIMO: How might the path look from standardization to implementation?

September 26th, 2013

In earlier blog posts on 802.11ac practical considerations, we reviewed 80 MHz channels, 256 QAM and 5 GHz migration. Continuing the 802.11ac insights series, in this post we will look at some practical aspects of MU-MIMO, which is the star attraction of the impending Wave-2 of 802.11ac.

 

MU-MIMO mechanics and 802.11ac standard

 

Illustration of 802.11ac MU-MIMO


At a high level, MU-MIMO allows an AP with multiple antennas to concurrently transmit frames to multiple clients, where each of the clients has fewer antennas than the AP. For example, an AP with 4 antennas can simultaneously use a 2-stream transmission to a client with 2 antennas and a 1-stream transmission to a client with 1 antenna. The implicit requirement for such concurrent transmission is beamforming, which has to ensure that the first client’s bits combine coherently at its location while the second client’s bits do the same at the second client’s location. It is also important that the first client’s bits form a null beam at the location of the second client, and vice versa.
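A toy zero-forcing precoder shows the peak/null idea numerically; this is an idealized sketch (perfect channel knowledge, two 1-antenna clients, no noise), not how any particular chipset implements it:

```python
import numpy as np

rng = np.random.default_rng(2)

# Channel rows: h1 = 4-antenna AP -> client 1, h2 = AP -> client 2.
H = (rng.standard_normal((2, 4)) + 1j * rng.standard_normal((2, 4))) / np.sqrt(2)

# Zero-forcing precoder: W = H^H (H H^H)^-1, so that H @ W = I.
# Column k of W carries client k's stream and forms a null at the other client.
W = H.conj().T @ np.linalg.inv(H @ H.conj().T)

x = np.array([1 + 0j, -1 + 0j])        # one symbol per client, sent concurrently
y = H @ (W @ x)                        # what each client's antenna receives

print("client 1 receives:", np.round(y[0], 3), "(wanted +1, no crosstalk)")
print("client 2 receives:", np.round(y[1], 3), "(wanted -1, no crosstalk)")
```

Because H @ W is the identity, each client hears only its own symbol: the peak for one client is, by construction, a null for the other. Everything that follows – grouping, sounding, re-grouping – exists to keep the AP's estimate of H accurate enough for those nulls to hold up in practice.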

 

What does the 802.11ac standard offer for implementing MU-MIMO?

  • The standard provides a Group ID Management procedure to form client groups. Clients in a given group can be considered together for co-scheduling of transmissions using MU-MIMO beamforming.
  • To perform the peak/null adjustments in MU-MIMO beamforming described above, the AP needs knowledge of the Tx-to-Rx channel matrix for each client in the group. For this, the standard provides a well-defined channel learning process wherein the AP transmits a sounding packet called an NDP (Null Data Packet), to which clients respond with channel feedback frames (this is called the explicit feedback mechanism).

 

What the standard does not specify

 

There is more to MU-MIMO implementation that is outside the scope of the standard. The true promise of MU-MIMO also depends on these additional implementation factors:

  • The AP has to identify clients that can be co-scheduled in a group. How to form these groups is implementation specific: it depends on the prevalent channel conditions to the different clients, and the AP will have to make smart decisions on group formation.
  • The AP has to keep track of channel conditions for clients in different groups by sending regular sounding packets and receiving explicit feedback from the clients. Implementations may differ in how frequent channel learning needs to be. Frequent learning increases channel overhead, but may result in cleaner (non-interfering) MU-MIMO beams; slow learning can result in stale information, causing inter-beam interference during concurrent transmissions.
  • When channel conditions change, re-grouping of clients is required. Implementations can differ in their re-grouping triggers and methods.
  • Implementations can also differ in how the total antennas on the AP are used for beamforming within any given group.
  • The performance of MU-MIMO also depends to some degree on the client-side implementation. To demodulate the MU-MIMO signal, clients can implement additional techniques such as interference cancellation to eliminate inter-beam interference.
  • The formation of MU-MIMO groups at the physical/MAC layer also has to align with the traffic and QoS requirements of the clients at higher protocol levels.

Practical impact

The above considerations are at the practical implementation level. Many of them are in the domain of chip design, and how well different chip vendors address them could differentiate them from one another in the MU-MIMO era.

These considerations can also impact the Wi-Fi chip design paradigm, which has traditionally used similar designs for AP and client radios. With MU-MIMO, the bulk of the new tasks will be performed at the AP, resulting in significant design differences between the AP-side and client-side chipsets.

Due to all the nuances of implementation and the sensitivity to channel conditions, comparing different MU-MIMO implementations in a practical network is a difficult task. Notwithstanding, I can imagine MU-MIMO becoming a table stake in RFPs after Wave-2 arrives, to which everyone will answer “yes” without heed to the exact implementation details. :-)

One radical thought

Given the cost and complexity of the chip-level tasks required for MU-MIMO, could there be some chip family that simply uses all the antennas on the AP to form a beam to a single client at a time? That is, sequential SU-MIMO instead of parallel MU-MIMO. What would be the pros and cons? Would MU-MIMO be only incrementally or significantly better than sequential SU-MIMO? Time will tell.

The devil is in the details!

 


 

802.11ac, Best practices, WiFi Access, WLAN networks

Get Soaked in the Future of Wi-Fi

September 5th, 2013

AirTight Networks is armed with Wi-Fi of the future, and blasting the message out through social media.

Have you ever noticed that there always seems to be a disconnect in the Wi-Fi industry, whereby vendors build and sell their products based on hardware capabilities, tech specs, and geeky feature sets, while customers ultimately evaluate products based on how the solution fits their organizational objectives? That’s a problem.


The Wi-Fi market is on the cusp of a second wind of tremendous growth that will be driven by focusing product solutions on the tailored needs of customers in every vertical market. However, this is a departure from the status quo: historically, the Wi-Fi market has grown by pushing products (not solutions) based on the latest hardware enhancements and the improvements in speed that have come with each iteration of the 802.11 standard. But that model is breaking down as the technology matures and hardware differentiation alone becomes minimal. Customers are demanding more tailored solutions as their own markets evolve towards a mobile-enabled workforce and customer experience.

WiFuture tweet

What’s exciting is that AirTight is already delivering Wi-Fi of the Future (#WiFuture if you’re following along on Twitter). We provide tailored solutions that include social Wi-Fi integration that enables retailers to engage consumers and provide enhanced customer service; presence and location analytics to understand and adapt to customer behavior in-store; and the most robust wireless security solution on the market to secure data well beyond basic PCI compliance requirements. And that’s only the beginning.


AirTight is building solutions that enable the Wi-Fi of the future through:


A Software-Centric Approach – leveraging the rich data analytics available through an intelligent access network, and software defined radios that enable flexibility of hardware use for client access, security monitoring, and performance analysis.


Intuitive User Experience – making Wi-Fi simpler to deploy and troubleshoot so the network isn’t broken or under-performing.


Operational Expense Model – enabling customers to acquire the latest solutions without breaking the budget.


Mature Cloud – one that is truly elastic, with both public cloud and private cloud options, enabling easy expansion to meet growing network demands without unnecessary retooling or re-plumbing of the existing network. A mature cloud offering also enables the coming wave of Managed Service Providers (MSPs) who will serve the mid-market.


A Culture of Listening – to customers, partners, and industry experts in various industries so that we understand the business drivers for technology solutions and ensure we build products that deliver on those needs.


@AirTight: Soaking the Industry in the Future of Wi-Fi!


WiFuture SuperSoaker

We are also building an incredible team of industry experts to blast this vision out to the market through social media. AirTight is armed with Super Social experts, kind of like those old Super Soaker water gun blasters we all loved a few decades ago (has it been that long already?!). The tank is full of energy and innovation, and the social media team at AirTight is at the trigger!


So, are you ready to blast away your Wi-Fi woes? Don’t get stuck on the wrong side, soaked and wet in yesterday’s technology.


802.11ac, 802.11n, WiFi Access, Wireless security, WLAN networks

11 Commandments of Wi-Fi Decision Making

September 4th, 2013

Are you considering a new Wi-Fi deployment or an upgrade of a legacy system? Then be prepared to navigate a maze of decision factors, given that Wi-Fi bake-offs increasingly require multi-faceted evaluation.


Follow these 11 “C”ommandments to navigate the Wi-Fi decision tree:


  1. Cost

  2. Complexity

  3. Coverage

  4. Capacity

  5. Capabilities

  6. Channels

  7. Clients

  8. Cloud

  9. Controller

  10. 11aC, and last but not least …

  11. seCurity!

 

hemant C tweet

 

1) Cost:

Cost consideration entails both “price and pricing” nuances. Price is the size of the dent to the budget, and everyone likes it to be as small as possible. Pricing is the manner in which that dent is made – painful or less painful (I don’t think it can ever be painless!). One aspect of pricing is the CAPEX/OPEX angle. Other aspects such as licensing, front-loaded versus back-loaded payments, maintenance fees, etc. have been around for a long time, so I won’t drill into their details other than to say that they exist and need to be considered. Enough said on cost.


2) Complexity:

Complexity consideration spans deployment, configuration and ongoing maintenance. One pitfall to avoid is to “like complexity in the lab and then repent it in production”. Too many knobs to turn and tune, excessive configuration flexibility and exotic features are some of the things that can add to complexity. That said, complexity considerations cannot swing to the point of being simplistic. Rather, the balanced approach is to look for solutions that have mastered complexity to extract simplicity to meet your needs (borrowing from Don Norman’s terminology here).


3) Coverage:

When you hear terms like neg 55, neg 60, neg 65, you know people are reconciling coverage expectations with the number of access points. There is no need to explain how important coverage is for your wireless network; the key point is that coverage determines the number of access points needed to cover the physical area. At the planning stage, RF predictive planning comes in handy to estimate the coverage BOM (a site survey can complement it for sample areas during the evaluation stage).


4) Capacity:

While coverage determines how far, capacity determines how many or how much. Capacity determines how small or large cells can be. Using practical models of Wi-Fi usage, capacity objectives can be set and the network design can be evaluated against them. Capacity also determines the number of access points needed to provide the desired capacity in the physical area, as the sketch below illustrates. RF predictive planning tools can be invaluable during the evaluation phase for capacity estimation.
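As an illustration of how capacity drives AP count, here is a back-of-the-envelope sketch; every input is an assumption you would replace with your own requirements:

```python
import math

# Back-of-the-envelope capacity sizing (all inputs are assumptions).
users = 600
active_fraction = 0.6            # fraction of users concurrently active
per_user_kbps = 500              # average demand per active user
per_radio_effective_mbps = 40    # realistic loaded throughput per radio
radios_per_ap = 2

demand_mbps = users * active_fraction * per_user_kbps / 1000
aps_for_capacity = math.ceil(demand_mbps / (per_radio_effective_mbps * radios_per_ap))
print(f"Aggregate demand: {demand_mbps:.0f} Mbps -> at least {aps_for_capacity} APs "
      f"(take the larger of this and the coverage-driven count)")
```

The final design takes the larger of the capacity-driven and coverage-driven AP counts, which is why the two "C"s are worth evaluating separately.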


5) Capabilities:

By capabilities, I mean the feature set. This is one of the most important aspects, because this is where you ask the question: “Will the Wi-Fi serve the needs of the business?” This is very industry specific. Some features are extremely critical in one vertical, but won’t even be noticed in others. So it’s important to identify both the features you care about and those you don’t. Once identified, you move on to thoroughly evaluate the ones you care about.


6) Channels:

One aspect of channels is deciding how the RF network will be provisioned along the lines of 2.4 GHz and 5 GHz operation. There are advantages to 5 GHz operation, but 2.4 GHz is not EOL yet. How applications are split between the two bands determines the number and type of radios required in the design. Tools and techniques to plan, monitor and adapt to the dynamic RF environment are also an important consideration.


7) Clients:

Much of what is achievable in a Wi-Fi network depends upon the capabilities of the client devices that will access it. One set of considerations is mainly around the radio capabilities of clients, such as 2.4 GHz/5 GHz operation, number of radio streams, implementation of newer standards, etc. Another set revolves around the applications they run and the traffic profiles those applications generate. Yet another set centers on the level of mobility of the clients. BYOD is another consideration that has become important in the clients arena.


8) Cloud or 9) Controller:

Today, we see pure cloud architecture, pure controller architecture, and also architectures confused between the two concepts. While vendors and experts spar over which is the right architecture for today’s and tomorrow’s Wi-Fi, evaluators should focus on comparing them based on their derived value. It is also important to understand what the cloud and controller concepts actually mean from the data, control and management plane perspective. Cloud and controller are distinct ways of organizing the overall Wi-Fi solution functionality.


10) 11aC:

Making a judicious decision on “what to deploy today” or “whether to upgrade now” is tricky, and there are many views around it. One reason is how the features of 802.11ac are split between Wave-1 and Wave-2. It is also important to note that the immediate 802.11ac benefits are application and vertical specific. Several practical network engineering considerations exist beyond the casual description of the new 802.11ac speeds that are often marketed. So listen to vendors, listen to business needs, listen to experts, analyze for yourself, and in the end do what is best for your environment and situation. Speed is nice IF it can be leveraged in practice!


11) SeCurity:

Any information system sans security is worse than worthless – especially today. That said, the level of security required by the wireless environment depends on factors such as the value of the information at risk, compliance requirements and enterprise security policies. The desired security level determines the right mix of inline data security (encryption, authentication) and security from unmanaged devices (WIPS). Talking of WIPS, the biggest red flags to watch for are trigger-happy solutions that generate false alarms, boast a long list of “popcorn” alerts and require excessive manual involvement in the security process.


My hope is that these “C”ommandments will serve as guidelines in your Wi-Fi decision making process. You can follow them in any order you like to ensure a holistic evaluation of the options before you. Every vendor, big or small, has sweet spots on some dimensions and not-so-sweet spots on others. So, despite what they tell you, nobody scores all A’s on all C’s. Hence one has to work on the evaluation criteria until a palatable scorecard is achieved, consistent with requirements and budget.


802.11ac, 802.11n, Best practices, WiFi Access, WLAN networks, WLAN planning