Player FM - Internet Radio Done Right
12 subscribers
Checked 3+ years ago
Added eight years ago

Storage Developer Conference
Content provided by SNIA Technical Council. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by SNIA Technical Council or their podcast platform partner. If you believe someone is using your copyrighted work without permission, you can follow the process outlined here: https://vi.player.fm/legal.
Storage Developer Podcast, created by developers for developers.
146 episodes
All episodes

#146: Understanding Compute Express Link (41:37)
Compute Express Link™ (CXL™) is an industry-supported cache-coherent interconnect for processors, memory expansion, and accelerators. Datacenter architectures are evolving to support the workloads of emerging applications in Artificial Intelligence and Machine Learning that require a high-speed, low-latency, cache-coherent interconnect. The CXL specification delivers breakthrough performance while leveraging PCI Express® technology to support rapid adoption. It addresses resource sharing and cache coherency to improve performance, reduce software stack complexity, and lower overall system costs, allowing users to focus on target workloads. Attendees will learn how CXL technology maintains a unified, coherent memory space between the CPU (host processor) and CXL devices, allowing a device to expose its memory as coherent in the platform and to directly cache coherent memory. This allows both the CPU and the device to share resources for higher performance and reduced software stack complexity. In CXL, the CPU host is primarily responsible for coherency management, abstracting peer device caches and CPU caches. The resulting simplified coherence model reduces the device cost, complexity, and overhead traditionally associated with coherency across an I/O link. Learning Objectives: Learn how CXL supports dynamic multiplexing between a rich set of protocols that includes I/O (CXL.io, based on PCIe®), caching (CXL.cache), and memory (CXL.mem) semantics; understand how CXL maintains a unified, coherent memory space between the CPU and any memory on the attached CXL device; gain insight into the features introduced in the CXL specification…
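Nothing special is needed on the host side to consume CXL.mem capacity once the platform enumerates it. As a minimal sketch (an illustration, not material from the talk), assuming Linux exposes the device's coherent memory as CPU-less NUMA node 1, an application can target it with libnuma; build with gcc -lnuma:

/* Minimal sketch: allocate from a CXL.mem-backed NUMA node.
 * Assumption: the CXL memory device appears as node 1 on this system. */
#include <numa.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "libnuma: NUMA not available on this system\n");
        return 1;
    }
    int cxl_node = 1;                 /* assumed node id of CXL.mem capacity */
    size_t len = 1 << 20;             /* 1 MiB */
    void *buf = numa_alloc_onnode(len, cxl_node);
    if (!buf)
        return 1;
    memset(buf, 0xA5, len);           /* plain loads/stores; coherency is managed by the host */
    numa_free(buf, len);
    return 0;
}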

#145: The Future of Accessing Files Remotely from Linux: SMB3.1.1 Client Status Update (45:14)
Improvements to the SMB3.1.1 client on Linux have continued at a rapid pace over the past year. These allow Linux to better access Samba servers, as well as the cloud (Azure), NAS appliances, Windows systems, Macs, and an ever-increasing number of embedded Linux devices, including those using the new SMB3 kernel server for Linux (ksmbd). The SMB3.1.1 client for Linux (cifs.ko) continues to be one of the most actively developed file systems on Linux, and these improvements have made it possible to run additional workloads remotely. The exciting recent addition of the new kernel server also allows more rapid development and testing of optimizations for Linux. Over the past year, performance has dramatically improved with features like multichannel (allowing better parallelization of I/O and utilization of multiple network devices simultaneously), much faster encryption and signing, better use of compounding, and improved support for RDMA. Security has improved, and alternative security models are now possible with the addition of modefromsid and idsfromsid, along with better integration with Kerberos security tooling. New features include the ability to swap over SMB3 and boot over SMB3. Quality continues to improve with more work on 'xfstests' and test automation, and tooling (cifs-utils) continues to be extended to make SMB3.1.1 mounts easier to use. This presentation will describe and demonstrate the progress that has been made over the past year in the Linux kernel client in accessing servers using the SMB3.1.1 family of protocols. In addition, recommendations on common configuration choices and troubleshooting techniques will be discussed. Learning Objectives: What new features are now possible when accessing servers from Linux? What new tools have been added to make it easier to use SMB3.1.1 mounts from Linux? What new features are nearing completion that you should expect to see in the near future? How can I configure the security settings I need to use SMB3.1.1 for my workload? How can I configure the client for optimal performance for my workload?…
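To make the configuration discussion concrete, here is a hedged sketch (the server path, mount point, and channel count are illustrative assumptions) of mounting a share over SMB 3.1.1 with multichannel and Kerberos from C; the same option string works with mount.cifs:

/* Sketch: mount an SMB3.1.1 share with multichannel enabled.
 * Requires CAP_SYS_ADMIN; //server/share and /mnt/share are placeholders. */
#include <sys/mount.h>
#include <stdio.h>

int main(void)
{
    const char *opts = "vers=3.1.1,multichannel,max_channels=4,sec=krb5";
    if (mount("//server/share", "/mnt/share", "cifs", 0, opts) != 0) {
        perror("mount");
        return 1;
    }
    return 0;
}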

#144
The NVMe Key Value (NVMe-KV) Command Set has been standardized as one of the new I/O command sets that NVMe supports. Additionally, SNIA has standardized a Key Value API that works with NVMe Key Value and allows access to data on a storage device using a key rather than a block address. The NVMe-KV Command Set uses the key to store a corresponding value on non-volatile media, then retrieves that value from the media by specifying the corresponding key. Key Value allows users to access key-value data without the costly and time-consuming overhead of additional translation tables between keys and logical blocks. This presentation will discuss the benefits of Key Value storage, present the major features of the NVMe-KV Command Set and how it interacts with the NVMe standards, and present open source work that is available to take advantage of Key Value storage. Learning Objectives: Present the standardization of the SNIA KV API; present the standardization of the NVMe Key Value Command Set; present the benefits of Key Value in computational storage; present open source work on Key Value storage…
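To illustrate the access model, the sketch below models Store/Retrieve semantics with hypothetical helpers (kv_store and kv_retrieve are invented names, not the actual SNIA KV API bindings), stubbed in memory so it runs; a real implementation would issue NVMe-KV commands to the device:

/* Hypothetical KV helpers: key -> value, no logical block addresses.
 * In-memory stubs stand in for NVMe-KV Store/Retrieve commands. */
#include <stdio.h>
#include <string.h>

static char g_key[16];                 /* NVMe-KV keys are small */
static char g_val[256];
static size_t g_vlen;

static int kv_store(const void *key, size_t klen,
                    const void *val, size_t vlen)
{
    if (klen > sizeof(g_key) || vlen > sizeof(g_val))
        return -1;
    memcpy(g_key, key, klen);
    memcpy(g_val, val, vlen);
    g_vlen = vlen;
    return 0;
}

static int kv_retrieve(const void *key, size_t klen,
                       void *val, size_t *vlen)
{
    if (memcmp(g_key, key, klen) != 0)
        return -1;                     /* key not found */
    memcpy(val, g_val, g_vlen);
    *vlen = g_vlen;
    return 0;
}

int main(void)
{
    char out[256];
    size_t outlen = sizeof(out);
    kv_store("user:42", 7, "hello", 5);           /* no key->LBA table */
    if (kv_retrieve("user:42", 7, out, &outlen) == 0)
        printf("%.*s\n", (int)outlen, out);
    return 0;
}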

#143: Deep Compression at Inline Speed for All-Flash Array (35:18)
The rapid improvement of overall $/GByte has driven the high-performance All-Flash Array to be increasingly adopted in both enterprise and cloud datacenters. Besides raw NAND density scaling with continued semiconductor process improvement, data reduction techniques have played and will continue to play a crucial role in further reducing the overall effective cost of All-Flash Arrays. One of the key data reduction techniques is compression. Compression can be performed both inline and offline. In fact, the best All-Flash Arrays often do both: fast inline compression at a lower compression ratio, and slower, opportunistic offline deep compression at a significantly higher compression ratio. However, with the rapid growth of both capacity and sustained throughput due to the consolidation of workloads on a shared All-Flash Array platform, a growing percentage of the data never gets the opportunity for deep compression. There is a deceptively simple solution: inline deep compression, with the additional benefits of reduced flash wear and networking load. The challenge, however, is the prohibitive amount of CPU cycles required: deep compression often requires 10x or more the CPU cycles of typical fast inline compression. Even worse, the challenge will continue to grow: CPU performance scaling has slowed down significantly (the breakdown of Dennard scaling), while the performance of All-Flash Arrays has been growing at a far greater pace. In this talk, I will explain how we can meet this challenge with a domain-specific hardware design. The hardware platform is a programmable FPGA-based PCIe card. It can sustain 5+ GByte/s of deep compression throughput with low latency even for small data block sizes, exploiting the TByte/s of bandwidth, sub-10ns latencies, and almost unlimited parallelism available on a modern mid-range FPGA device. The hardware compression algorithm is trained with a vast amount of data available to our systems. Our benchmarks show it can match or outperform some of the best software compressors available in the market without taxing the CPU. Learning Objectives: Hardware architecture for inline deep compression; design of a hardware deep compression engine; inline and offline compression of All-Flash Arrays…
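The inline-versus-deep cycle gap is easy to see in software. Here is a small experiment using zlib levels 1 and 9 as stand-ins for fast inline and deep compression (an assumption for illustration; these are not the compressors discussed in the talk); build with gcc -lz:

/* Compare a fast and a deep compression level on the same buffer. */
#include <zlib.h>
#include <stdio.h>
#include <time.h>

static void try_level(const unsigned char *src, uLong n, int level)
{
    static unsigned char dst[1 << 17];
    uLongf dlen = sizeof(dst);
    clock_t t0 = clock();
    compress2(dst, &dlen, src, n, level);
    printf("level %d: %lu -> %lu bytes in %.3f ms\n", level, n, dlen,
           1000.0 * (clock() - t0) / CLOCKS_PER_SEC);
}

int main(void)
{
    static unsigned char buf[1 << 15];
    for (unsigned i = 0; i < sizeof(buf); i++)
        buf[i] = (unsigned char)(i % 251);        /* mildly compressible */
    try_level(buf, sizeof(buf), 1);               /* "inline" speed */
    try_level(buf, sizeof(buf), 9);               /* "deep" ratio */
    return 0;
}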

#142: ZNS: Enabling In-place Updates and Transparent High Queue-Depths (45:23)
Zoned Namespaces represent the first step towards the standardization of Open-Channel SSD concepts in NVMe. Specifically, ZNS brings the ability to implement data placement policies in the host, thus providing mechanisms to (i) lower the write-amplification factor (WAF), (ii) lower NAND over-provisioning, and (iii) tighten tail latencies. Initial ZNS architectures envisioned large zones targeting archival use cases. This motivated the creation of the "Append Command" - a specialization of nameless writes that makes it possible to increase the device I/O queue depth beyond the initial limitation imposed by the zone write pointer. While this is an elegant solution, backed by academic research, the changes required in file systems and applications are making adoption more difficult. As an alternative, we have proposed exposing a per-zone random write window that allows out-of-order writes around the existing write pointer. This solution brings two benefits over the "Append Command": First, it allows I/Os to arrive out of order without any host software changes. Second, it allows in-place updates within the window, which enables existing log-structured file systems and applications to retain their metadata model without incurring a WAF penalty. In this talk, we will cover in detail the concept of the random write window, the use cases it addresses, and the changes we have made in the Linux stack to support it. Learning Objectives: Learn about the general ZNS architecture and ecosystem; learn about the use cases supported in ZNS and the design decisions in the current specification with regard to in-place updates and multiple in-flight I/Os; learn about new features being brought to NVMe to support in-place updates and transparent high queue depths…
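The write pointer at the center of this design is visible from user space through the Linux zoned-block ioctls. A minimal sketch reading the first zone's write pointer (the device path is an assumption and must refer to a zoned device):

/* Report the first zone of a zoned block device and print its write pointer. */
#include <linux/blkzoned.h>
#include <sys/ioctl.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int fd = open("/dev/nvme0n1", O_RDONLY);      /* assumed ZNS namespace */
    if (fd < 0) { perror("open"); return 1; }

    /* Header plus room for a single zone descriptor. */
    struct blk_zone_report *rep =
        calloc(1, sizeof(*rep) + sizeof(struct blk_zone));
    if (!rep) return 1;
    rep->sector = 0;                              /* start at the first zone */
    rep->nr_zones = 1;

    if (ioctl(fd, BLKREPORTZONE, rep) != 0) {
        perror("BLKREPORTZONE");
        return 1;
    }

    struct blk_zone *z = &rep->zones[0];
    printf("zone start %llu, write pointer %llu, cond 0x%x\n",
           (unsigned long long)z->start, (unsigned long long)z->wp, z->cond);
    free(rep);
    close(fd);
    return 0;
}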

#141: Unlocking the New Performance and QoS Capabilities of the Software-Enabled Flash API (51:12)
The Software-Enabled Flash API gives unprecedented control to application architects and developers to redefine the way they use flash for their hyperscale applications, by fundamentally redefining the relationship between the host and solid-state storage. Dive deep into new Software-Enabled Flash concepts such as virtual devices, Quality of Service (QoS) domains, Weighted Fair Queueing (WFQ), nameless writes and copies, and controller offload mechanisms. This talk by KIOXIA (formerly Toshiba Memory) will include real-world examples of using the new API to define QoS and latency guarantees, enforce workload isolation, minimize write amplification through application-driven data placement, and achieve higher performance with customized flash translation layers (FTLs). Learning Objectives: Provide an in-depth dive into using the Software-Enabled Flash API; map application workloads to Software-Enabled Flash structures; understand how to implement QoS requirements using the API…
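As a purely illustrative sketch of these concepts, the code below carves a virtual device into two weighted QoS domains; every sef_* name here is an invented stand-in (with do-nothing stubs so it runs), not KIOXIA's actual API:

/* Invented sef_* API modeling virtual devices and WFQ QoS domains. */
#include <stdio.h>

typedef int sef_vdev;                  /* a slice of the flash dies */
typedef int sef_qos_domain;            /* isolation + scheduling unit */

static sef_vdev sef_create_virtual_device(int dies)
{ (void)dies; return 1; }
static sef_qos_domain sef_create_qos_domain(sef_vdev v, int wfq_weight)
{ (void)v; (void)wfq_weight; return 1; }
static int sef_write(sef_qos_domain d, const void *buf, unsigned len,
                     unsigned placement_hint)
{ (void)d; (void)buf; (void)len; (void)placement_hint; return 0; }

int main(void)
{
    sef_vdev vd = sef_create_virtual_device(4);         /* 4 dies, isolated */
    sef_qos_domain lat  = sef_create_qos_domain(vd, 8); /* latency-sensitive */
    sef_qos_domain bulk = sef_create_qos_domain(vd, 1); /* background work */

    char data[4096] = {0};
    sef_write(lat,  data, sizeof(data), 0);  /* app-driven placement hints */
    sef_write(bulk, data, sizeof(data), 1);  /* keep streams separated */
    printf("two tenants share one virtual device under WFQ\n");
    return 0;
}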

#140
The NVM Express workgroup introduces new features frequently, and the Linux kernel support for these devices evolves with it. These ever-moving targets create challenges for tool developers as new interfaces are created or older ones change. This talk will provide information on some of these recent features and enhancements, and introduce the open source 'libnvme' project: a library, available in public git repositories, that provides access to all NVM Express features through convenient abstractions over the kernel interfaces used to interact with your devices. The session will demonstrate integrating the library with other programs, and also provide an opportunity for the audience to share what additional features they would like to see from this common library in the future. Learning Objectives: Explain protocol and host operating system interaction complexities; introduce libnvme and how it manages those relationships; demonstrate integration with applications…
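A minimal sketch of the library in use, issuing Identify Controller through libnvme instead of hand-rolled ioctls (the device path is an assumption, and details may differ across library versions); build with gcc -lnvme:

/* Identify an NVMe controller via libnvme's convenience wrapper. */
#include <libnvme.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

int main(void)
{
    int fd = open("/dev/nvme0", O_RDONLY);    /* controller character device */
    if (fd < 0) { perror("open"); return 1; }

    struct nvme_id_ctrl ctrl;
    if (nvme_identify_ctrl(fd, &ctrl) != 0) {
        fprintf(stderr, "identify failed\n");
        return 1;
    }
    /* Fixed-width, space-padded fields straight from the spec. */
    printf("model %.40s serial %.20s fw %.8s\n", ctrl.mn, ctrl.sn, ctrl.fr);
    close(fd);
    return 0;
}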

#139: Use Cases for NVMe-oF for Deep Learning Workloads and HCI Pooling (58:29)
The efficiency, performance, and choice in NVMe-oF are enabling some very unique and interesting use cases, from AI/ML to hyperconverged infrastructures. Artificial Intelligence workloads process massive amounts of data from structured and unstructured sources. Today most deep learning architectures rely on local NVMe to serve tagged and untagged datasets into map-reduce systems and neural networks for correlation. NVMe-oF for deep learning infrastructures enables a shared data model for ML/DL pipelines without sacrificing overall performance and training times. NVMe-oF is also enabling HCI deployments to scale without adding more compute, enabling end customers to reduce dark flash and reduce cost. The talk explores these and several innovative technologies driving the next storage connectivity revolution. Learning Objectives: Storage architectures for deep learning workloads; extending the reach of HCI platforms using NVMe-oF; Ethernet Bunch of Flash architectures…

#138
NVMe is the fastest growing storage technology of the last decade and has succeeded in unifying client, hyperscale, and enterprise applications into a common storage framework. NVMe has evolved from being a disruptive technology to becoming a core element in storage architectures. In this session, we will talk about the NVMe transition to a merged base specification inclusive of both NVMe and NVMe-oF architectures. We will provide an overview of the latest NVMe technologies, summarize the NVMe standards roadmap, and describe the latest NVMe standardization initiatives. We will also present a number of areas of innovation that preserve the simple, fast, scalable paradigm while extending the broad appeal of the NVMe architecture. These continued innovations will ready the NVMe technology ecosystem for yet another period of growth and expansion. Learning Objectives: Learn about the NVMe transition to a merged base specification inclusive of both NVMe and NVMe-oF architectures; receive a summary of the NVMe standards roadmap; understand the latest NVMe standardization initiatives…

#137: Caching on PMEM: an Iterative Approach (43:29)
With PMEM boasting a much higher density and DRAM-like performance, applying it to in-memory caching such as memcached seems like an obvious thing to try. Nonetheless, there are questions when it comes to new technology. Would it work for our use cases, in our environment? How much effort does it take to find out if it works? How do we capture the most value with a reasonable investment of resources? How can we continue to find a path forward as we make discoveries? At Twitter, we took an iterative approach to exploring cache on PMEM. With significant early help from Intel, we started with simple tests in Memory Mode in a lab environment, and moved on to App Direct mode with modifications to Pelikan (pelikan.io), a modular open-source cache backend developed by Twitter. With positive results from the lab runs, we moved the evaluation to platforms that more closely represent Twitter's production environment, and uncovered interesting differences. With a better understanding of how Twitter's cache workload behaves on the new hardware, and our insight into Twitter's cache workload in general, we are proposing a new cache storage design called Segcache that, among other things, offers flexibility with storage media and in particular is designed with PMEM in mind. As a result, it achieves superior performance and effectiveness when running either on DRAM or PMEM. The whole exploration was made easier by the modular architecture of Pelikan, and we added a benchmark framework to support the evaluation of storage modules in isolation, which also greatly facilitated our exploration and development. Learning Objectives: Demonstrate the feasibility of using PMEM for caching and meeting production requirements; provide a case study on how software companies can approach and adopt new technology like PMEM iteratively; provide observations and suggestions on how to promote a more integral hardware/software design cycle…
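For readers new to App Direct mode, a minimal sketch with PMDK's libpmem shows the load/store path such a cache is built on (the DAX file path is an assumption, and Pelikan's actual storage module differs); build with gcc -lpmem:

/* Map a file on a DAX filesystem and persist a store without msync. */
#include <libpmem.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    size_t mapped_len;
    int is_pmem;
    char *base = pmem_map_file("/mnt/pmem0/cache.bin", 1 << 20,
                               PMEM_FILE_CREATE, 0600, &mapped_len, &is_pmem);
    if (!base) { perror("pmem_map_file"); return 1; }

    strcpy(base, "cached item");            /* ordinary store instructions */
    if (is_pmem)
        pmem_persist(base, mapped_len);     /* flush CPU caches to media */
    else
        pmem_msync(base, mapped_len);       /* fallback for non-PMEM files */

    pmem_unmap(base, mapped_len);
    return 0;
}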

#136
Software-based memory-to-memory data movement is common, but takes valuable cycles away from application performance. At the same time, offload DMA engines are vendor-specific and may lack capabilities around virtualization and user-space access. This talk will focus on how SDXI (Smart Data Acceleration Interface), a newly formed SNIA TWG, is working to bring about an extensible, virtualizable, forward-compatible, memory-to-memory data movement and acceleration interface specification. As new memory technologies are adopted and memory fabrics expand the use of tiered memory, data mover acceleration and its uses will increase. The TWG will encourage adoption of and extensions to this data mover interface. Learning Objectives: A new proposed standard for a memory-to-memory data movement interface; a new TWG to develop this standard; use cases where this will apply to evolving storage architectures with memory pooling and persistent memory…
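Because SDXI is descriptor-based, a host fills in a data-mover descriptor and submits it rather than calling memcpy. The sketch below invents a descriptor layout and a software-fallback submit purely for illustration; it does not follow the actual SDXI descriptor format:

/* Invented copy descriptor with a software fallback "device". */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

struct dm_desc {                 /* hypothetical layout, not SDXI's */
    uint64_t src;                /* source address */
    uint64_t dst;                /* destination address */
    uint32_t len;                /* bytes to move */
    uint32_t flags;              /* e.g. interrupt-on-completion */
};

static void dm_submit(const struct dm_desc *d)
{
    /* Hardware would DMA this asynchronously; fall back to the CPU here. */
    memcpy((void *)(uintptr_t)d->dst,
           (const void *)(uintptr_t)d->src, d->len);
}

int main(void)
{
    char src[64] = "offloaded copy", dst[64] = {0};
    struct dm_desc d = {
        .src = (uintptr_t)src, .dst = (uintptr_t)dst,
        .len = sizeof(src), .flags = 0,
    };
    dm_submit(&d);
    printf("%s\n", dst);
    return 0;
}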

#135: SmartNICs and SmartSSDs, the Future of Smart Acceleration (50:50)
Since the advent of the smartphone over a decade ago, we've seen several new "smart" technologies, but few have had a significant impact on the data center until now. SmartNICs and SmartSSDs will change the landscape of the data center, but what comes next? This talk will summarize the state of the SmartNIC market by classifying and discussing the technologies behind the leading products in the space. Then it will dive into the emerging technology of SmartSSDs and how they will change the face of storage and solutions. Finally, we'll dive headfirst into the impact of PCIe 5 and Compute Express Link (CXL) on the future of smart acceleration in solution delivery. Learning Objectives: Understand the current state of the SmartNIC market and its leading products; introduce the concept of SmartSSDs and two products available today; discuss the future of device-to-device (D2D) communications using PCIe and CXL/CCIX; lay out a vision for composable solutions in which multiple devices on a PCIe bus communicate directly…

#134: Best Practices for OpenZFS L2ARC in the Era of NVMe (53:47)
The ZFS L2ARC is now more than 10 years old. Over that time, a lot of secret incantations and tribal knowledge have been created by users, testers, developers, and the odd sales or marketing person. That collection of community wisdom informs the use and/or tuning of ZFS L2ARC for certain I/O profiles, dataset sizes, server classes, share protocols, and device types. In this talk, we will review a case study in which we tested a few of these L2ARC myths on an NVMe-capable OpenZFS storage appliance. Can high-speed NVMe flash devices keep L2ARC relevant in the face of ever-increasing memory capacity for ARC (primary cache) and all-flash storage pools? Learning Objectives: 1) Overview of ZFS L2ARC design goals and high-level implementation details that pertain to our findings; 2) Performance characteristics of L2ARC during warming and when warmed, plus any tradeoffs or pitfalls with L2ARC in these states; 3) How to leverage NVMe as L2ARC devices to improve performance in a few storage use cases…

#133: NVMe-based Video and Storage Solutions for Edge-based Computational Storage (40:58)
5G wireless technology will bring vastly superior data rates to the edge of the network. However, with this increase in bandwidth will come applications that significantly increase overall network throughput. Video applications will likely explode as end users gain large amounts of data bandwidth to work with. Video will not only require advanced compression but will also require large amounts of data storage. Combining advanced compression technologies with storage allows a high density of storage and compression in a small amount of rack space with little power, ideal for placement at the edge of the network. An NVMe-based module provides the opportunity to use computational storage elements to enable edge compute and video compression. This presentation will provide technical details and various options to combine video and storage on an NVMe interface. Further, it will explore how this NVMe device can be virtualized for both storage and video in an edge compute environment. Learning Objectives: 1) Understand how NVMe can be used for both video and storage; 2) Understand how computational storage can be virtualized using NVMe; 3) Understand why combinational element modules such as video storage will become important after deployment of 5G networks…

#132: Emerging Scalable Storage Management Functionality (38:53)
By now, you have a good understanding of SNIA Swordfish™ and how it extends the DMTF Redfish® specification to manage storage equipment and services. Attend this presentation to learn what's new and how the specification has evolved since last year. The speaker will share the latest updates, ranging from details of features and profiles to new vendor-requested functionality, which extend the specification's support from direct-attached storage to NVMe. You won't want to miss this opportunity to be brought up to speed. Learning Objectives: 1) Educate the audience on what's new with Swordfish; 2) Describe features and profiles and why they are useful; 3) Provide an overview of vendor-requested Swordfish functionality…
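Since Swordfish rides on Redfish's RESTful interface, exploring a service takes nothing more than an HTTPS client. A minimal sketch with libcurl (the host, credentials, and collection path are assumptions; a real client starts at /redfish/v1 and follows links); build with gcc -lcurl:

/* GET a Redfish/Swordfish storage collection; the JSON body prints to stdout. */
#include <curl/curl.h>
#include <stdio.h>

int main(void)
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *c = curl_easy_init();
    if (!c) return 1;

    curl_easy_setopt(c, CURLOPT_URL,
                     "https://storage.example.com/redfish/v1/Storage");
    curl_easy_setopt(c, CURLOPT_USERPWD, "admin:password"); /* assumed creds */
    curl_easy_setopt(c, CURLOPT_SSL_VERIFYPEER, 0L); /* lab only: self-signed */

    CURLcode rc = curl_easy_perform(c);
    if (rc != CURLE_OK)
        fprintf(stderr, "request failed: %s\n", curl_easy_strerror(rc));

    curl_easy_cleanup(c);
    curl_global_cleanup();
    return 0;
}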