Solarflare, a pioneer in the development of neural-class networks, has announced networking solutions for cloud service providers designed to eliminate the performance penalty of operating system overhead.
At the heart of the solution is a microkernel architecture that keeps the busy kernel as small as possible by onloading networking services into fast user-space memory, with no modification to applications. This week at NGINX Conf 2018, Solarflare is demonstrating how NGINX Plus, equipped with Solarflare's Onload® kernel bypass software running in user space and XtremeScale® NICs, supports four times more user requests for web content.
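Because Onload works by intercepting standard BSD socket calls, an unmodified server binary can be placed on the kernel-bypass path directly from the command line. A minimal sketch of such a deployment (the specific profile name and flags shown are illustrative assumptions about a typical configuration, not a claim about the demo setup):

```shell
# Launch an unmodified NGINX binary over Onload user-space networking.
# The onload wrapper preloads the bypass library so the application's
# socket calls are serviced in user space instead of the kernel.
onload --profile=latency nginx -g 'daemon off;'

# Equivalent explicit form using the dynamic linker directly:
LD_PRELOAD=libonload.so nginx -g 'daemon off;'
```

In both forms the application itself is unchanged; the bypass library is injected at load time, which is what lets providers adopt it without rebuilding or patching their load balancers.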
The Solarflare solution for microkernel architectures allows Internet Service Providers (ISPs) to transform their software load balancers into revenue-producing infrastructure. ISPs support thousands of high-traffic websites, each serving up to millions of concurrent requests from users. With Onload user-space networking, IT organizations can deploy more efficient software load balancers, each supporting far more requests, and use the savings to invest in revenue-producing app and web servers.
Solarflare is pioneering server connectivity for neural-class networks. From silicon to firmware to software, Solarflare provides a comprehensive, integrated set of technologies for distributed, ultra-scale, software-defined datacenters.