
Tech News - Application Development

279 Articles

GNU community announces ‘Parallel GCC’ for parallelism in real-world compilers

Savia Lobo
16 Sep 2019
5 min read
Yesterday, the team behind the GNU project announced Parallel GCC, a research project aiming to parallelize a real-world compiler. Parallel GCC targets machines with many cores, where build-level parallelism alone cannot provide enough parallelism. A parallel GCC can also serve as a reference when designing a parallel compiler from scratch.

GCC is an optimizing compiler that automatically optimizes code when compiling. Its optimization phase involves three steps:

• Inter Procedural Analysis (IPA): builds a callgraph and uses it to decide how to perform optimizations.
• GIMPLE Intra Procedural Optimizations: performs several hardware-independent optimizations inside each function.
• RTL Intra Procedural Optimizations: performs several hardware-dependent optimizations inside each function.

Once IPA has collected information and decided how to optimize all functions, it sends each function to the GIMPLE optimizer, which in turn sends it to the RTL optimizer, after which the final code is generated. This process repeats for every function in the code.

Also Read: Oracle introduces patch series to add eBPF support for GCC

Why a Parallel GCC?

The team designed the parallel architecture to increase parallelism and reduce overhead. When IPA finishes its analysis, a number of threads equal to the number of logical processors is spawned, to avoid scheduling overhead. One of those threads inserts all analyzed functions into a threadsafe producer-consumer queue, which all threads consume. Once a thread has finished processing one function, it queries the queue for the next available function, until it finds an EMPTY token. At that point the thread finalizes, as there are no more functions to be processed. This architecture is used to parallelize the per-function GIMPLE Intra Procedural Optimizations and can easily be extended to also support the RTL Intra Procedural Optimizations. It does not, however, cover the IPA passes or the per-language front-end analysis.
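The queue-based scheme described above can be sketched as follows. This is a simplified Python illustration of the architecture only, not GCC's actual implementation; the `optimize` function and all names are hypothetical stand-ins.

```python
import queue
import threading

EMPTY = object()  # sentinel token marking the end of the work queue

def optimize(function_name):
    # Stand-in for a per-function GIMPLE optimization pass.
    return f"optimized:{function_name}"

def worker(work_queue, results, lock):
    # Each thread consumes functions until it sees the EMPTY token,
    # then puts the token back so sibling threads also terminate.
    while True:
        item = work_queue.get()
        if item is EMPTY:
            work_queue.put(EMPTY)
            return
        result = optimize(item)
        with lock:
            results.append(result)

def parallel_optimize(functions, num_threads=4):
    work_queue = queue.Queue()
    results, lock = [], threading.Lock()
    threads = [threading.Thread(target=worker, args=(work_queue, results, lock))
               for _ in range(num_threads)]
    for t in threads:
        t.start()
    # One producer inserts all analyzed functions, then the sentinel.
    for fn in functions:
        work_queue.put(fn)
    work_queue.put(EMPTY)
    for t in threads:
        t.join()
    return results

print(sorted(parallel_optimize(["foo", "bar", "baz"])))
```

Putting the sentinel back after consuming it is what lets a single EMPTY token shut down any number of worker threads.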
Code refactoring to achieve Parallel GCC

The team refactored several parts of the GCC middle-end code in the Parallel GCC project, and says there are still many places where code refactoring is necessary for the project to succeed. “The original code required a single function to be optimized and outputted from GIMPLE to RTL without any possible change of what function is being compiled,” the researchers wrote in their official blog. Several structures in GCC were made per-thread or threadsafe, either by being replicated using the C11 thread notation, by allocating the data structure in the thread stack, or simply by inserting locks.

“One of the most tedious parts of the job was detecting and making several global variables threadsafe, and they were the cause of most crashes in this project. Tools made for detecting data races, such as Helgrind and DRD, were useful in the beginning but then showed their limitations as the project advanced. Several race conditions had a small window and did not happen when the compiler ran inside these tools. Therefore there is a need for better tools to help find global variables or race conditions,” the blog mentions.

Performance results

The team compiled the file gimple-match.c, the biggest file in the GCC project. It has more than 100,000 lines of code, around 1,700 functions, and almost no loops inside these functions. The computer used in this benchmark had an Intel(R) Core(TM) i5-8250U CPU with 8 GB of RAM, that is, a CPU with 4 cores with Hyperthreading, resulting in 8 virtual cores. The following are the results before and after Intra Procedural GIMPLE parallelization.

Source: gcc.gnu.org

The figure shows the results before and after Intra Procedural GIMPLE parallelization: the elapsed time dropped from 7 seconds to around 4 seconds with 2 threads and around 3 seconds with 4 threads, resulting in speedups of 1.72x and 2.52x, respectively.
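As a quick sanity check on these numbers: the speedups are simply ratios of elapsed times, and the whole-compiler estimate quoted later (about 1.61x) is in the spirit of Amdahl's law. In the sketch below, the precise parallel times are back-solved from the reported speedups, and the parallel fraction is an assumed illustrative value, not a figure stated in the article.

```python
def speedup(serial_time, parallel_time):
    # Speedup is the ratio of serial to parallel elapsed time.
    return serial_time / parallel_time

def amdahl_speedup(parallel_fraction, local_speedup):
    # Amdahl's law: only the parallelizable fraction of the total
    # work benefits from the local speedup.
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / local_speedup)

# Elapsed times for gimple-match.c: ~7 s serial, ~4 s with 2 threads,
# ~2.8 s with 4 threads (exact values back-solved from the speedups).
print(round(speedup(7.0, 4.07), 2))  # close to the reported 1.72x
print(round(speedup(7.0, 2.78), 2))  # close to the reported 2.52x

# Assumed fraction of total compilation time spent in the
# parallelized GIMPLE passes (illustrative, not from the article).
p = 0.63
print(round(amdahl_speedup(p, speedup(7.0, 2.78)), 2))
```

With a parallel fraction of roughly 0.63, Amdahl's law yields a whole-compiler speedup of about 1.61x, matching the estimate quoted below.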
Here we can also see that using Hyperthreading did not impact the result. This result was used to estimate the improvement from RTL parallelization.

Source: gcc.gnu.org

The above results show that, when compared with the total compilation time, there is a smaller improvement of about 10% when compiling the file.

Source: gcc.gnu.org

In this figure, using the same approach as in the previous graph, one can estimate a speedup of 1.61x for GCC as a whole once it is parallelized, using the speedup information obtained for GIMPLE.

The team has suggested certain to-dos for contributors wanting to advance Parallel GCC:

• Find and fix all race conditions in GIMPLE. There are still random crashes when code is compiled using the parallel option.
• Make this GCC compile itself.
• Make this GCC pass all tests in the testsuite.
• Add multithreading support to the garbage collector.
• Parallelize the RTL part. This will improve the current results, as indicated above.
• Parallelize the IPA part. This can also improve compilation time during LTO compilations.
• Refactor the thread-local variables by allocating them as soon as threads are started, or at pass execution.

GCC project members say that this project is under development and still has several bugs. A user on Hacker News writes, “I look forward to this. One that will be important for reproducible builds is having tests for non-determinism. Having nondeterministic code gen in a compiler is a source of frustration and despair and sucks to debug.”

To know about Parallel GCC in detail, read the official document.

Other interesting news in programming

Introducing ‘ixy’, a simple user-space network driver written in high-level languages like Rust, Go, and C#, among others

GNOME 3.34 releases with tab pinning, improved background panel, custom folders and more!

The Eclipse Foundation releases Jakarta EE 8, the first truly open-source, vendor-neutral Java EE


Microsoft announces its support for bringing exFAT in the Linux kernel; open sources technical specs

Bhagyashree R
29 Aug 2019
3 min read
Yesterday, Microsoft announced that it supports the addition of its Extended File Allocation Table (exFAT) file system to the Linux kernel and publicly released its technical specification.

https://twitter.com/OpenAtMicrosoft/status/1166742237629308928

Launched in 2006, the exFAT file system is the successor to Microsoft's FAT and FAT32 file systems, which are widely used in a majority of flash memory storage devices such as USB drives and SD cards. It uses 64 bits to describe file size and allows for clusters as large as 32MB. As per the specification, it was implemented with simplicity and extensibility in mind.

John Gossman, Microsoft Distinguished Engineer and Linux Foundation Board Member, wrote in the announcement, “exFAT is the Microsoft-developed file system that’s used in Windows and in many types of storage devices like SD cards and USB flash drives. It’s why hundreds of millions of storage devices that are formatted using exFAT “just work” when you plug them into your laptop, camera, and car.”

As exFAT was previously proprietary, mounting these flash drives and cards on Linux machines required installing additional software, such as a FUSE-based exFAT implementation. Supporting exFAT in the Linux kernel will give users a full-featured implementation that can also be more performant than the FUSE implementation. In addition, its inclusion in OIN's Linux System Definition will allow its cross-licensing in a royalty-free manner. Microsoft shared that the exFAT code incorporated into the Linux kernel will be licensed under GPLv2.

https://twitter.com/OpenAtMicrosoft/status/1166773276166828033

In addition to supporting exFAT in the Linux kernel, Microsoft also hopes that its specification becomes part of the Open Invention Network’s (OIN) Linux System Definition. Keith Bergelt, OIN's CEO, told ZDNet, "We're happy and heartened to see that Microsoft is continuing to support software freedom.
They are giving up the patent levers to create revenue at the expense of the community. This is another step of Microsoft's transformation in showing it's truly committed to Linux and open source."

The next edition of the Linux System Definition is expected to be published in the first quarter of 2020, after which any member of the OIN will be able to use exFAT without paying a patent royalty.

The Linux Foundation also appreciated Microsoft's move to bring exFAT to the Linux kernel:

https://twitter.com/linuxfoundation/status/1166744195199115264

Other developers also shared their excitement. A Hacker News user commented, “OMG, I can't believe we finally have a cross-platform read/write disk format. At last. No more Fuse. I just need to know when it will be available for my Raspberry Pi.”

Read the official announcement by Microsoft to know more in detail.

Microsoft Edge Beta is now ready for you to try

Microsoft introduces public preview of Azure Dedicated Host and updates its licensing terms

CERN plans to replace Microsoft-based programs with affordable open-source software


Apple previews macOS Catalina 10.15 beta, featuring Apple Music, TV apps, security, zsh shell, DriverKit, and much more!

Amrata Joshi
04 Jun 2019
6 min read
Yesterday, Apple previewed the next version of macOS, called Catalina, at its ongoing Worldwide Developers Conference (WWDC) 2019. macOS 10.15, or Catalina, comes with new features, apps, and technology for developers. With Catalina, Apple is replacing iTunes with entertainment apps such as Apple Podcasts, Apple Music, and the Apple TV app. macOS Catalina is expected to be released this fall.

Craig Federighi, Apple’s senior vice president of Software Engineering, said, “With macOS Catalina, we’re bringing fresh new apps to the Mac, starting with new standalone versions of Apple Music, Apple Podcasts and the Apple TV app.” He further added, “Users will appreciate how they can expand their workspace with Sidecar, enabling new ways of interacting with Mac apps using iPad and Apple Pencil. And with new developer technologies, users will see more great third-party apps arrive on the Mac this fall.”

What’s new in macOS Catalina

Sidecar feature

Sidecar is a new feature in macOS 10.15 that lets users extend their Mac desktop by using an iPad as a second display, or as a high-precision input device across creative Mac apps. Users can draw, sketch, or write in any Mac app that supports stylus input by pairing the iPad with an Apple Pencil. Sidecar can be used for editing video with Final Cut Pro X, marking up iWork documents, or drawing with Adobe Illustrator.

iPad app support

Catalina comes with iPad app support, a new way for developers to port their iPad apps to the Mac. Previously codenamed “Marzipan,” this project is now called Catalyst. Developers will now be able to use Xcode to target their iPad apps at macOS Catalina. Twitter is planning on porting its iOS Twitter app to the Mac, and Atlassian is planning to bring its Jira iPad app to macOS Catalina. Though it is still not clear how many developers are going to support this porting, Apple is encouraging developers to port their iPad apps to the Mac.
https://twitter.com/Atlassian/status/1135631657204166662

https://twitter.com/TwitterSupport/status/1135642794473558017

Apple Music

Apple Music is a new music app that will help users discover new music, with over 50 million songs, playlists, and music videos. Users will have access to their entire music library, including the songs they have downloaded, purchased, or ripped from a CD.

Apple TV app

The Apple TV app features Apple TV channels, personalized recommendations, and more than 100,000 iTunes movies and TV shows. Users can browse, buy, or rent, and also enjoy 4K HDR and Dolby Atmos-supported movies. It also comes with a Watch Now section containing the Up Next option, where users can easily keep track of what they are currently watching and resume on any screen. Apple TV+, Apple’s original video subscription service, will be available in the Apple TV app this fall.

Apple Podcasts

The Apple Podcasts app features over 700,000 shows in its catalog and comes with an option to be automatically notified of new episodes as soon as they become available. The app comes with new categories, collections curated by editors around the world, and advanced search tools that help find episodes by host, guest, or even discussion topic. Users can easily sync their media to their devices using a cable in the new entertainment apps.

Security

In macOS Catalina, Gatekeeper checks all apps for known security issues, and the new data protections now require all apps to get permission before accessing user documents. Approve with Apple Watch lets users approve security prompts by tapping the side button on their Apple Watch. With the new Find My app, the location of a lost or stolen Mac can easily be found and anonymously relayed back to its owner by other Apple devices, even when it is offline.
Macs will occasionally send a secure Bluetooth signal, which will be used to create a mesh network with other Apple devices to help people track their products. A map of the device's location is then populated, helping users track down their devices. Also, Macs with the T2 Security Chip now support Activation Lock, which will make them less attractive to thieves.

DriverKit

The macOS Catalina SDK (10.15+ beta) comes with the DriverKit framework, which can be used for creating device drivers that the user installs on their Mac. Drivers built with DriverKit run in user space for improved system security and stability. The framework provides C++ classes for IO services, memory descriptors, device matching, and dispatch queues. DriverKit further defines IO-appropriate types for numbers, strings, collections, and other common types. These are used with family-specific driver frameworks like USBDriverKit and HIDDriverKit.

zsh shell on Mac

With the macOS Catalina beta, currently available only to members of the Apple Developer Program, the Mac uses zsh as the interactive shell and the default login shell. Users can also make zsh the default in earlier versions of macOS. Currently, bash is the default shell in macOS Mojave and earlier. zsh is compatible with the Bourne shell (sh) and bash. The company is also signalling that developers should start moving to zsh on macOS Mojave or earlier. As bash isn’t a modern shell, it seems the company thinks that switching to something less dated would make more sense.

https://twitter.com/film_girl/status/1135738853724000256

https://twitter.com/_sjs/status/1135715757218705409

https://twitter.com/wongmjane/status/1135701324589256704

Additional features

Safari now has an updated start page that uses Siri Suggestions to elevate frequently visited sites, bookmarks, iCloud tabs, reading list selections, and links sent in Messages.
macOS Catalina also comes with options to block email from a specified sender, mute an overly active thread, and unsubscribe from commercial mailing lists. Reminders has been redesigned with a new user interface that makes it easier to create, organize, and track reminders.

It seems users are excited about the announcements made by the company and are looking forward to exploring the possibilities of the new features.

https://twitter.com/austinnotduncan/status/1135619593165189122

https://twitter.com/Alessio____20/status/1135825600671883265

https://twitter.com/MasoudFirouzi/status/1135699794360438784

https://twitter.com/Allinon85722248/status/1135805025928851457

To know more about this news, check out Apple’s post.

Apple proposes a “privacy-focused” ad click attribution model for counting conversions without tracking users

Apple Pay will soon support NFC tags to trigger payments

U.S. Supreme Court ruled 5-4 against Apple on its App Store monopoly case


To create effective API documentation, know how developers use it, says ACM

Bhagyashree R
19 Jul 2019
5 min read
Earlier this year, the Association for Computing Machinery (ACM), in the January 2019 issue of Communication Design Quarterly (CDQ), discussed how developers use API documentation when getting into a new API and suggested a few guidelines for writing effective API documentation.

Application Programming Interfaces (APIs) are standardized and documented interfaces that allow applications to communicate with each other without having to know how they are implemented. Developers often turn to API references, tutorials, example projects, and other resources to understand how to use them in their projects. To support the learning process effectively and optimize API documentation, the study tried to answer the following questions:

• Which information resources offered by the API documentation do developers use, and to what extent?
• What approaches do developers take when they start working with a new API?
• What aspects of the content hinder efficient task completion?

API documentation and content categories used in the study

The study was done on 12 developers (11 male and 1 female), who were asked to solve a set of pre-defined tasks using an unfamiliar public API. To solve these tasks, they were allowed to refer only to the documentation published by the API provider. The participants used the API documentation about 49% of the time while solving the tasks. On an individual level, there was not much variation, with the means for all but two participants ranging between 41% and 56%. The most used content category was the API reference, followed by the Recipes page. The aggregate time spent on the Recipes and Samples categories together was almost equal to the time spent on the API reference category. The Concepts page, however, was used less often than the API reference.
Source: ACM

“These findings show that the API reference is an important source of information, not only to solve specific programming issues when working with an API developers already have some experience with, but even in the initial stages of getting into a new API, in line with Meng et al. (2018),” the study concludes.

How developers learn a new API

The researchers observed two different problem-solving behaviors, very similar to the opportunistic and systematic developer personas discussed by Clarke (2007). Developers with the opportunistic approach tried to solve the problem in an “exploratory fashion”. They were more intuitive, open to making errors, and often tried solutions without double-checking in the documentation. This group did not invest much time in getting a general overview of the API before starting with the first task, and preferred fast and direct access to information over reading large sections of the documentation.

In contrast, developers with the systematic approach tried to first get a deeper understanding of the API before using it. They took some time to explore the API and prepare the development environment before starting with the first task. This group of developers attempted to follow the proposed processes and suggestions closely. They were also able to notice parts of the documentation that were not directly relevant to the given task.

What aspects of API documentation make it hard for developers to complete tasks efficiently?

Lack of transparent navigation and search function

Some participants felt that the API documentation lacked a consistent system of navigation aids and did not offer side navigation, including within-page links. Developers often needed a search function when they were missing a particular piece of information, such as a term they did not know.
As the documentation used in the test did not offer a search field, developers had to use a simple page search instead, which was often unsuccessful.

Issues with high-level structuring of API documentation

The participants observed several problems in the high-level structuring of the API documentation, that is, the split of information into Concepts, Samples, API reference, and so on. For instance, when searching for a particular piece of information, participants sometimes found it difficult to decide which content category to select. It was particularly unclear how the content provided in Samples and Recipes was distinct.

Inability to reuse code examples

Most of the time, participants developed their solution using the sample code provided in the documentation. However, efficient use of the sample code was hindered by placeholders in the code referencing some other code example.

A few guidelines for writing effective API documentation

• Organizing the content according to API functionality: API documentation should be divided into categories that reflect the functionality or content domain of the API. Participants would have found it more convenient if, instead of dividing the documentation into “Samples,” “Concepts,” “API reference” and “Recipes,” the API used categories such as “Shipment Handling,” “Address Handling” and so on.

• Enabling efficient access to relevant content: While designing API documentation, it is important to take specific measures to improve access to content that is relevant to the task at hand. This can be done by organizing the content according to API functionality, presenting conceptual information integrated with related tasks, and providing transparent navigation and a powerful search function.

• Facilitating initial entry into the API: For this, you need to identify appropriate entry points into the API and relate particular tasks to specific API elements.
Provide clean and working code examples, provide relevant background knowledge, and connect concepts to code.

• Supporting different development strategies: While creating API documentation, you should also keep in mind the different strategies developers adopt when approaching a new API. Both the content and the way it is presented should serve the needs of both opportunistic and systematic developers.

These were some observations and implications from the study. To know more, read the paper: How Developers Use API Documentation: An Observation Study.

GraphQL API is now generally available

Best practices for RESTful web services: Naming conventions and API Versioning [Tutorial]

Stripe’s API suffered two consecutive outages yesterday causing elevated error rates and response times


Ubuntu has decided to drop i386 (32-bit) architecture from Ubuntu 19.10 onwards

Vincy Davis
19 Jun 2019
4 min read
Update: Five days after announcing the decision to drop the i386 architecture, Steve Langasek has changed his stance. Yesterday, 23rd June, Langasek apologised to users and clarified that this is not the case: Ubuntu is only dropping updates to the i386 libraries, which will be frozen at their 18.04 LTS versions. He also mentioned that Ubuntu plans to support i386 applications, including games, on versions of Ubuntu later than 19.10.

This update comes after Valve Linux developer Pierre-Loup Griffais tweeted on 21st June that Steam will not support Ubuntu 19.10 and its future releases, and recommended the same to its users. Griffais stated that Valve is planning to switch to a different distribution and is evaluating ways to minimize breakage for its users.

https://twitter.com/Plagman2/status/1142262103106973698

Amid the uncertainty around i386, Wine developers have also raised concerns, because many 64-bit Windows applications still use a 32-bit installer or some 32-bit components. Rosanne DiMesio, one of the admins of Wine’s Applications Database (AppDB) and Wiki, said in a mail archive that there are many possibilities, such as building pure 64-bit Wine packages for Ubuntu.

Yesterday the Ubuntu engineering team announced their decision to discontinue i386 (32-bit) as an architecture, from Ubuntu 19.10 onwards. In a post to the Ubuntu Developer Mailing List, Canonical’s Steve Langasek explains that “i386 will not be included as an architecture for the 19.10 release, and we will shortly begin the process of disabling it for the eoan series across Ubuntu infrastructure.” Langasek also mentions that builds and packages of 32-bit software, libraries, and tools will no longer be distributed for, or work on, newer versions of Ubuntu. He adds that the Ubuntu team will be working out the details of 32-bit support over the course of the 19.10 development cycle.
The topic of dropping i386 systems has been under discussion in the Ubuntu developer community since last year. One of the mails in the archive mentions that “Less and less non-amd64-compatible i386 hardware is available for consumers to buy today from anything but computer parts recycling centers. The last of these machines were manufactured over a decade ago, and support from an increasing number of upstream projects has ended.”

Earlier this year, Langasek stated in one of his mail archives that running a 32-bit i386 kernel on recent 64-bit Intel chips carried a risk of weaker security than using a 64-bit kernel. Usage of i386 has also declined broadly across the ecosystem, and hence it is “increasingly going to be a challenge to keep software in the Ubuntu archive buildable for this target”, he adds.

Langasek also informed users that automated upgrades to 18.10 are disabled on i386. This has been done to enable i386 users to stay on the LTS, which will be supported until 2023, rather than be stranded on a non-LTS release supported only until early 2021.

The general reaction to this news has been negative, with users expressing outrage at the discontinuation of the i386 architecture. A user on Reddit says, “Dropping support for 32-bit hosts is understandable. Dropping support for 32 bit packages is not. Why go out of your way to screw over your users?” Another user comments, “I really truly don't get it. I've been using ubuntu at least since 5.04 and I'm flabbergasted how dumb and out of sense of reality they have acted since the beginning, considering how big of a headstart they had compared to everyone else.
Whether it's mir, gnome3, unity, wayland and whatever else that escapes my memory this given moment, they've shot themselves in the foot over and over again.” On Hacker News, a user commented “I have a 64-bit machine but I'm running 32-bit Debian because there's no good upgrade path, and I really don't want to reinstall because that would take months to get everything set up again. I'm running Debian not Ubuntu, but the absolute minimum they owe their users is an automatic upgrade path.” Few think that this step was needed, for the sake of riddance. Another Redditor adds,  “From a developer point of view, I say good riddance. I understand there is plenty of popular 32-bit software still being used in the wild, but each step closer to obsoleting 32-bit is one step in the right direction in my honest opinion.” Xubuntu 19.04 releases with latest Xfce package releases, new wallpapers and more Ubuntu 19.04 Disco Dingo Beta releases with support for Linux 5.0 and GNOME 3.32 Chromium blacklists nouveau graphics device driver for Linux and Ubuntu users


Amazon Managed Streaming for Apache Kafka (Amazon MSK) is now generally available

Amrata Joshi
03 Jun 2019
3 min read
Last week, Amazon Web Services announced the general availability of Amazon Managed Streaming for Apache Kafka (Amazon MSK). Amazon MSK makes it easy for developers to build and run applications based on Apache Kafka without having to manage the underlying infrastructure. It is fully compatible with Apache Kafka, which enables customers to easily migrate their on-premises or Amazon Elastic Compute Cloud (Amazon EC2) clusters to Amazon MSK without code changes.

Customers use Apache Kafka to capture and analyze real-time data streams from a range of sources, including database logs, IoT devices, financial systems, and website clickstreams. Many customers choose to self-manage their Apache Kafka clusters and end up spending time and money on securing, scaling, patching, and ensuring high availability for Apache Kafka and Apache ZooKeeper clusters. Amazon MSK combines these attributes of Apache Kafka with the availability, security, and scalability of AWS.

Customers can now create Apache Kafka clusters that are designed for high availability and span multiple Availability Zones (AZs) with a few clicks. Amazon MSK also monitors server health and automatically replaces servers when they fail. Customers can easily scale out cluster storage in the AWS management console to meet changes in demand. Amazon MSK runs the Apache ZooKeeper nodes at no additional cost and provides multiple levels of security for Apache Kafka clusters, including VPC network isolation and AWS Identity and Access Management (IAM). It allows customers to continue running applications built on Apache Kafka and to use Apache Kafka compatible tools and frameworks.
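For illustration, a highly available cluster like the one described, spanning three AZs, could be requested through the AWS SDK for Python (boto3). This is only a sketch: the cluster name, subnet IDs, instance type, and Kafka version below are placeholder assumptions, and the actual API call is left commented out so the fragment stays self-contained.

```python
# Request parameters for an MSK cluster designed for high availability:
# three broker nodes spread across three Availability Zones.
# Name, subnet IDs, instance type, and Kafka version are placeholders.
cluster_request = {
    "ClusterName": "demo-cluster",
    "KafkaVersion": "2.2.1",
    "NumberOfBrokerNodes": 3,  # one broker per Availability Zone
    "BrokerNodeGroupInfo": {
        "InstanceType": "kafka.m5.large",
        "ClientSubnets": ["subnet-aaaa", "subnet-bbbb", "subnet-cccc"],
    },
}

# With AWS credentials configured, the request would be submitted as:
# import boto3
# kafka = boto3.client("kafka")
# response = kafka.create_cluster(**cluster_request)

print(len(cluster_request["BrokerNodeGroupInfo"]["ClientSubnets"]))  # 3
```

MSK requires the broker count to be a multiple of the number of client subnets, which is why three brokers pair naturally with three AZs here.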
Rajesh Sheth, General Manager of Amazon MSK, AWS, wrote to us in an email, "Customers who are running Apache Kafka have told us they want to spend less time managing infrastructure and more time building applications based on real-time streaming data.” He further added, “Amazon MSK gives these customers the ability to run Apache Kafka without having to worry about managing the underlying hardware, and it gives them an easy way to integrate their Apache Kafka applications with other AWS services. With Amazon MSK, customers can stand up Apache Kafka clusters in minutes instead of weeks, so they can spend more time focusing on the applications that impact their businesses.”

Amazon MSK is currently available in the US East (Ohio), US East (N. Virginia), US West (Oregon), EU (Ireland), EU (Frankfurt), EU (Paris), EU (London), Asia Pacific (Singapore), Asia Pacific (Tokyo), and Asia Pacific (Sydney) regions, and will expand to additional AWS Regions within the next year.

Amazon rejects all 11 shareholder proposals, including the employee-led climate resolution, at annual shareholder meeting

Amazon to roll out automated machines for boxing up orders: thousands of workers’ jobs at stake

Amazon resists public pressure to re-assess its facial recognition business; “failed to act responsibly”, says ACLU
Introducing TensorFlow Graphics packed with TensorBoard 3D, object transformations, and much more

Amrata Joshi
10 May 2019
3 min read
Yesterday, the team at TensorFlow introduced TensorFlow Graphics. A computer graphics pipeline requires 3D objects and their positioning in the scene, a description of the material they are made of, lights, and a camera. This scene description is then interpreted by a renderer to generate a synthetic rendering. In contrast, a computer vision system starts from an image and tries to infer the parameters of the scene. This allows the prediction of which objects are in the scene, what materials they are made of, and their three-dimensional position and orientation.

Developers usually require large quantities of data to train machine learning systems capable of solving these complex 3D vision tasks. As labelling data is an expensive and complex process, it is better to have mechanisms to design machine learning models that can comprehend the three-dimensional world while being trained without much supervision. Combining computer vision and computer graphics techniques lets us leverage the vast amounts of unlabelled data. For instance, this can be achieved with analysis by synthesis, where the vision system extracts the scene parameters and the graphics system renders back an image based on them. If the rendering matches the original image, the vision system has accurately extracted the scene parameters. In this setup, computer vision and computer graphics go hand in hand, forming a single machine learning system, similar to an autoencoder, that can be trained in a self-supervised manner.

Image source: TensorFlow

We will now explore some of the functionalities of TensorFlow Graphics.

Object transformations
Object transformations control the position of objects in space. The axis-angle formalism is used for rotating a cube: with a rotation axis that points up and a positive angle, the cube rotates counterclockwise. This task is also at the core of many applications, including robots that interact with their environment.

Modelling cameras
Camera models play a crucial role in computer vision as they influence the appearance of three-dimensional objects projected onto the image plane. For more details about camera models and a concrete example of how to use them in TensorFlow, check out the Colab example.

Material models
Material models define how light interacts with objects to give them their unique appearance. Some materials, like plaster, reflect light uniformly in all directions, while others, like mirrors, are purely specular. Users can play with the parameters of the material and the light to develop a good sense of how they interact.

TensorBoard 3D
TensorFlow Graphics features a TensorBoard plugin to interactively visualize 3D meshes and point clouds. This also enables visual debugging, which helps to assess whether an experiment is going in the right direction.

To know more about this news, check out the post on Medium.

TensorFlow 1.13.0-rc2 releases!
TensorFlow 1.13.0-rc0 releases!
TensorFlow.js: Architecture and applications
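The axis-angle rotation described above can be sketched in plain Python with Rodrigues' rotation formula. This is a hand-rolled illustration of the math, not the actual TensorFlow Graphics API; the function name is ours.

```python
import math

def axis_angle_rotate(v, axis, angle):
    """Rotate 3-vector v about a unit-length axis by angle (radians),
    using Rodrigues' rotation formula:
        v' = v*cos(t) + (k x v)*sin(t) + k*(k . v)*(1 - cos(t))
    """
    kx, ky, kz = axis
    # cross product k x v
    cx = ky * v[2] - kz * v[1]
    cy = kz * v[0] - kx * v[2]
    cz = kx * v[1] - ky * v[0]
    # dot product k . v
    d = kx * v[0] + ky * v[1] + kz * v[2]
    c, s = math.cos(angle), math.sin(angle)
    return [
        v[0] * c + cx * s + kx * d * (1 - c),
        v[1] * c + cy * s + ky * d * (1 - c),
        v[2] * c + cz * s + kz * d * (1 - c),
    ]

# A rotation axis pointing up (+z) with a positive angle rotates
# counterclockwise: the x unit vector lands (approximately) on y.
print(axis_angle_rotate([1, 0, 0], [0, 0, 1], math.pi / 2))  # ≈ [0, 1, 0]
```

TensorFlow Graphics wraps the same operation as a differentiable op, so the angle can be learned by gradient descent rather than computed in closed form.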

Facebook AI introduces Aroma, a new code recommendation tool for developers

Natasha Mathur
09 Apr 2019
3 min read
Facebook AI team announced a new tool, called Aroma, last week. Aroma is a code-to-code search and recommendation tool that makes use of machine learning (ML) to simplify the process of gaining insights from big codebases. Aroma allows engineers to find common coding patterns easily by making a search query, without any need to manually browse through code snippets. This, in turn, helps save time in their development workflow. So, in case a developer has written code but wants to see how others have implemented the same code, they can run a search query to find similar code in related projects. After the search query is run, results are returned as code 'recommendations'. Each code recommendation is built from a cluster of similar code snippets found in the repository.

Aroma is more advanced than traditional code search tools. For instance, Aroma performs the search on syntax trees. Instead of looking for string-level or token-level matches, Aroma can find instances that are syntactically similar to the query code. It can then further highlight the matching code by cutting down the unrelated syntax structures. Aroma is very fast and creates recommendations within seconds for large codebases. Moreover, Aroma's core algorithm is language-agnostic and can be deployed across codebases in Hack, JavaScript, Python, and Java.

How does Aroma work?
Aroma follows a three-step process to make code recommendations: feature-based search, re-ranking and clustering, and intersecting.

For feature-based search, Aroma indexes the code corpus as a sparse matrix. It parses each method in the corpus and creates its parse tree. It then extracts a set of structural features from the parse tree of each method. These features capture information about variable usage, method calls, and control structures. Finally, a sparse vector is created for each method according to its features, and the top 1,000 method bodies whose dot products with the query vector are highest are retrieved as the candidate set for the recommendation.

In the case of re-ranking and clustering, Aroma first re-ranks the candidate methods by their similarity to the query code snippet. Since the sparse vectors contain only abstract information about which features are present, the dot product score is an underestimate of the actual similarity of a code snippet to the query. To eliminate that, Aroma applies 'pruning' on the method syntax trees. This discards the irrelevant parts of a method body and retains the parts that best match the query snippet. This is how it re-ranks the candidate code snippets by their actual similarity to the query. Further ahead, Aroma runs an iterative clustering algorithm to find clusters of code snippets that are similar to each other and contain extra statements useful for making code recommendations.

In the case of intersecting, a code snippet is first taken as the "base" code, and then 'pruning' is applied iteratively on it with respect to every other method in the cluster. The code remaining after the pruning process is the code common among all methods, making it a code recommendation.

"We believe that programming should become a semiautomated task in which humans express higher-level ideas and detailed implementation is done by the computers themselves," states the Facebook AI team.

For more information, check out the official Facebook AI blog.

How to make machine learning based recommendations using Julia [Tutorial]
Facebook AI open-sources PyTorch-BigGraph for faster embeddings in large graphs
Facebook AI research and NYU school of medicine announces new open-source AI models and MRI dataset
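The feature-based search step can be sketched with a toy example. Here plain token counts stand in for Aroma's structural features (which are extracted from parse trees and are much richer), and the corpus, method names, and helper functions are all illustrative.

```python
from collections import Counter

def features(code):
    # Toy stand-in for Aroma's structural features: plain token counts.
    # The real system extracts features from parse trees (variable usage,
    # method calls, control structures).
    return Counter(code.split())

def dot(a, b):
    # Sparse dot product: only tokens present in both vectors contribute.
    return sum(count * b[token] for token, count in a.items() if token in b)

# A tiny "corpus" of method bodies, indexed by name.
corpus = {
    "loop_print": "for item in items : print ( item )",
    "busy_wait":  "while True : pass",
    "loop_sum":   "for x in xs : total += x",
}

query = features("for item in items : total += item")
# Rank candidate methods by dot-product similarity to the query,
# mirroring Aroma's feature-based retrieval step.
ranked = sorted(corpus, key=lambda name: dot(query, features(corpus[name])),
                reverse=True)
print(ranked[0])  # the for-loop over items is the closest match
```

Because the vectors are sparse, this retrieval scales to large corpora; the exact-similarity check via pruning happens only on the small candidate set.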

LLVM will be relicensing under Apache 2.0 start of next year

Prasad Ramesh
18 Oct 2018
3 min read
After efforts since last year, LLVM, the set of compiler building tools, is moving closer to an Apache 2.0 license. Currently, the project has its own open source license created by the LLVM team. Based on the mailing list discussions, the project has decided to go forward with Apache 2.0.

Why the shift to Apache 2.0?
The current license is a bit vague, was not very welcoming to contributors, and had some patent issues. Hence, the team decided to shift to the industry-standard Apache 2.0. The new license was drafted by Heather Meeker, the same lawyer who worked on the Commons Clause. The goals of the relicensing, as listed on their website, are:

- Encourage ongoing contributions to LLVM by preserving a low barrier to entry for contributors.
- Protect users of LLVM code by providing explicit patent protection in the license.
- Protect contributors to the LLVM project by explicitly scoping their patent contributions with this license.
- Eliminate the schism between runtime libraries and the rest of the compiler that makes it difficult to move code between them.
- Ensure that LLVM runtime libraries may be used by other open source and proprietary compilers.

The plan to shift LLVM to Apache 2.0
The new license is not just Apache 2.0; the license header reads "Apache License v2.0 with LLVM Exceptions". The exceptions are related to compiling source code. To know more about the exceptions, follow the mailing list. The team plans to install the new license and a developer policy that references both the new and old licenses. From that point, all subsequent contributions will be under both licenses. They have a two-fold plan to ensure contributors are aware. They are going to ask many active contributors (both enterprises and individuals) to explicitly sign an agreement to relicense their contributions. Signing will make the change clear and known while also covering historical contributions. For any other contributors, commit access will be revoked until the LLVM organization can confirm that they are covered by one of the agreements.

The agreements
For the plan to work, both individuals and companies need to sign an agreement to relicense. There is a process for both.

Individuals
Individuals will have to fill out a form with the necessary information, like email addresses and potential employers, to effectively relicense their contributions. The form contains a link to a DocuSign agreement to relicense any of their individual contributions under the new license. Signing the document makes things easier, as it avoids confusion about whether a contribution is covered by some company. The form and agreement are available on Google Forms.

Companies
There is a DocuSign agreement for companies too. Some companies, like Argonne National Laboratory and Google, have already signed the agreement. There will be no explicit copyright notice, as the team doesn't feel it is worthwhile.

The current planned timeline is to install the new developer policy and the new license after the LLVM 8.0 release in January 2019. For more details, you can read the mail.

A libre GPU effort based on RISC-V, Rust, LLVM and Vulkan by the developer of an earth-friendly computer
LLVM 7.0.0 released with improved optimization and new tools for monitoring
OpenMP, libc++, and libc++abi, are now part of llvm-toolchain package

Introducing ActiveState State Tool, a CLI tool to automate dev & test setups, workflows, share secrets and manage ad-hoc tasks

Amrata Joshi
29 Aug 2019
3 min read
Today, the team at ActiveState, a software company known for building Perl, Python and Tcl runtime environments, introduced the ActiveState Platform Command Line Interface (CLI), the State Tool. This new CLI tool aims at automating manual tasks such as the setup of development and test systems. With this tool, all the instructions in a Readme can be reduced to a single command.

How can the State Tool benefit developers?

Eases ad-hoc tasks
The State Tool can address tasks that cause trouble for developers, such as project setups or environment setups that don't work the first time. It also helps developers manage dependencies, system libraries and other such tasks that affect productivity and usually end up consuming developers' coding time. The State Tool can be used to automate the ad hoc tasks that developers come across on a daily basis.

Deployment of runtime environments
With this tool, developers can deploy a consistent runtime environment into a virtual environment on their machine and across CI/CD systems with a single command.

Sharing secrets and cross-platform scripts
Developers can now centrally create secrets that can be securely shared among team members without the need for a password manager, email, or Slack. They can create and share cross-platform scripts that include secrets for kicking off builds and running tests, as well as simplifying and speeding up common development tasks. Developers can incorporate secrets in their scripts by simply referencing their names.

Automation of workflows
All the workflows that developers handle can now be centrally automated with this tool.

Jeff Rouse, vice president, product management, said in a statement, "Developers are a hardy bunch. They suffer through a thousand annoyances at project startup/restart time, but soldier on anyway. It's just the way things have always been done. With the State Tool, it doesn't have to stay that way. The State Tool addresses all the hidden costs in a project that sap developer productivity. This includes automating environment setup to secrets sharing, and even automating the day to day scripts that everyone counts on to get their jobs done. Developers can finally stop solving the same annoying problems over and over again, and just rely on the State Tool so they can spend more time coding."

To know more about this news, check out the official page.

Podcasting with Linux Command Line Tools and Audacity
GitHub's 'Hub' command-line tool makes using git easier
Command-Line Tools
Salesforce Spring 18 - New features to be excited about in this release!

Richa Tripathi
25 Apr 2018
3 min read
Salesforce has welcomed spring with their new Salesforce Spring '18 release. With this release, Salesforce users, admins, and developers can try out some fresh features and tools to enhance, tweak, and guide the processes that govern their Salesforce instances. This release brings exciting enhancements to the Lightning Platform and advanced developments in artificial intelligence. Without further ado, let's have a quick look at some of the noteworthy features that are really going to change the way you work with Salesforce.

Create personalized navigation in Lightning Experience
This new Lightning Experience feature allows one to reorder, rename, or remove added items. The navigation bar now contains more than just object-level items: one can add granular items, like a dashboard, list, or record.

Build Interactive Salesforce Surveys
Creating beautiful, easy-to-use forms for collecting feedback and data from users or customers is now easy. All survey data is stored in your org, the entity that contains the users, data, and automation corresponding to an individual organization. This unified data storage is especially helpful in creating reports and dashboards and sharing insights within your organization.

Customize your Org to match your brand using themes
The ability to customize the look and feel of Salesforce has never really been available. With Spring '18, create up to 300 custom themes or clone the built-in themes provided by Salesforce with just a few clicks.

Easy calculation of Opportunity Scoring using Einstein
One can prioritize their way to more business with Einstein Opportunity Scoring, which generates opportunity scores based on the record details, history, and related activities of the opportunity and related account. Information about the opportunity owner, such as yearly win rates, is also used to calculate the score.

Stay on Top of Duplicate Records by Using Duplicate Jobs
Duplicates are a pain for most organizations. With the relatively recent release of duplicate and matching rules in Salesforce, creating duplicate jobs with standard or custom matching rules to scan Salesforce business or person accounts, contacts, or leads for duplicates has become easy. Job results can be shared with others, and information about the duplicate jobs is logged. These logs help you track your progress in reducing the number of duplicate records in your Salesforce org.

Easy storage of Data Privacy Preferences
Data privacy records, based on the Individual object, let one store certain data privacy preferences for their customers. These records can help honor and respect customers' wishes when they request only specific forms of contact from one's company. Some laws and regulations, such as the General Data Protection Regulation (GDPR), can require you to honor your customers' wishes.

See More Relevant Objects First in Top Results
Top Results lists the most relevant results for the most frequently used objects. The improved ordering of objects means less scrolling and clicking around to reach the object one wants.

To know more about these and other releases in detail, visit the Salesforce Blog.

Read More
Implementing Automation Process with Salesforce CRM
Build a custom Admin Home page in Salesforce CRM Lightning Experience
Getting Started with Salesforce Lightning Experience

GitHub's 'Hub' command-line tool makes using git easier

Bhagyashree R
08 Jul 2019
3 min read
GitHub introduced 'Hub', a tool that extends the git command line with extra functionality, enabling developers to complete their everyday GitHub tasks right from the terminal. Hub does not have any dependencies, but as it is designed to wrap git, it is recommended to have at least git 1.7.3 or newer.

Hub provides both new commands and extended versions of commands that already exist in git. Here are some of them:

- hub-am: Used to replicate commits locally from a GitHub pull request.
- hub-cherry-pick: Allows cherry-picking a commit from a fork on GitHub.
- hub-alias: Used to show shell instructions for wrapping git.
- hub-browse: Used to open a GitHub repository in a web browser.
- hub-create: Used to create a new repository on GitHub and add a git remote for it.
- hub-fork: Allows forking the current repository on GitHub and adds a git remote for it.

You can see the entire list of commands on the Hub man page. Most of these commands are expected to be run in the context of an existing local git repository.

What are the advantages of using Hub?

- Contributing to open source: This tool makes contributing to open source much easier by providing features for fetching repositories, navigating project pages, forking repos, and even submitting pull requests, all from the command line.
- Scripting your workflows: You can easily script your workflows and set priorities by listing and creating issues, pull requests, and GitHub releases.
- Easily maintaining projects: It allows you to easily fetch from other forks, review pull requests, and cherry-pick URLs.
- Using GitHub for work: It saves time by allowing you to open pull requests for code reviews and push to multiple remotes at once. It also supports GitHub Enterprise; however, it needs to be whitelisted.

Hub is not the only tool of its kind; there are tools like Magit Forge and Lab. Though developers think that it is convenient, some feel that it increases GitHub lock-in. "While it is pretty cool, using such tool increases general lock-in to GitHub, in terms of both habits and potential use of it for automation of processes," a user expressed their opinion on Hacker News.

Another Hacker News user suggested, "I wish there was an open standard for operations that hub allows to do and all major Git forges, including open source ones, such as Gogs/Gitea and GitLab, supported it. In that case having a command-line tool that, like Git itself, is not tied to a particular vendor, but allows to do what hub does, could have been indispensable."

To know more in detail, check out Hub's GitHub repository.

Pull Panda is now a part of GitHub; code review workflows now get better!
Github Sponsors: Could corporate strategy eat FOSS culture for dinner?

GitLab 11.7 releases with multi-level child epics, API integration with Kubernetes, search filter box and more

Amrata Joshi
23 Jan 2019
5 min read
Yesterday, the team at GitLab released GitLab 11.7, an application for the DevOps lifecycle that helps developer teams work together efficiently to secure their code. GitLab 11.7 comes with features like multi-level child epics, API integration with Kubernetes, cross-project pipelines, and more.

What's new in GitLab 11.7?

Managing releases
This version of GitLab eliminates the need for manual collection of source code, build output, or metadata associated with a released version of the source code. GitLab 11.7 brings Releases to GitLab Core, which lets users have release snapshots that include the source code and related artifacts.

Multi-level child epics for work breakdown structures
This release adds multi-level child epics to GitLab portfolio management, which allow users to create multi-level work breakdown structures and help in managing complex projects and work plans. This structure builds a direct connection between planning and actionable issues. Users can now have an epic containing both issues and epics.

Streamlining JavaScript development with NPM registries
This release also delivers NPM registries in GitLab Premium, providing a standard and secure way to share and version-control NPM packages across projects. Users can then share a package-naming convention for utilizing libraries in any Node.js project via NPM.

Remediating vulnerabilities
GitLab 11.7 helps users remediate vulnerabilities in their apps and suggests solutions for Node.js projects managed with Yarn. Users can download a patch file and apply it to their repo using the git apply command. They can then push the changes back to their repository, and the security dashboard will confirm whether the vulnerability is gone. This process is easy and reduces the time required to deploy a solution.

API integration with Kubernetes
This release adds API support to the Kubernetes integration. All the actions currently available in the GUI, such as listing, adding, and deleting a Kubernetes cluster, are now accessible through the API. Developers can use this feature to fold cluster creation into their workflow.

Cross-project pipelines
With this release, it is now possible to expand upstream or downstream cross-project pipelines from the pipeline view, so users can view pipelines across projects.

Search filter box for issue board navigation
This release comes with a search filter that makes navigation much easier. Users can simply type a few characters in the search filter box to narrow down to the issue board they are interested in.

Project list redesign
The project list UI is redesigned in GitLab 11.7, focusing on readability and a summary of the project's activity.

Import issues via CSV
This release makes transitions easier. Users can now import issues into GitLab while managing their existing work. This feature works with Jira or any other issue tracking system that can generate a CSV export.

Support for catch-all email mailboxes
This release supports sub-addressing and catch-all email mailboxes with a new email format that allows more email servers to be used with GitLab, including Microsoft Exchange and Google Groups.

Include CI/CD files from other projects and templates
With this release, users can now include snippets of configuration from other projects and predefined templates. This release also includes snippets for specific jobs, like sast or dependency_scanning, so users can use them instead of copying and pasting the current definition.

GitLab Runner 11.7
The team at GitLab also released GitLab Runner 11.7 yesterday. It is an open source project that is used to run CI/CD jobs and send the results back to GitLab.

Major improvements
In GitLab 11.7, the performance of viewing merge requests has been improved by caching syntax-highlighted discussion diffs. Push performance has been improved by skipping pre-commit validations that have passed on other branches. Redundant counts in snippets search have been removed. This release also ships with Mattermost 5.6, an open source Slack alternative that includes interactive message dialogs, new admin tools, Ukrainian language support, and more.

Users are generally happy with the GitLab 11.7 release. One user, who has been using GitLab for quite some time, is waiting for an MR [0]. They commented on Hacker News, "I'm impatiently waiting for this MR [0] that will allow dependant containers to also talk to each other. It's the last missing piece for my ideal CI setup." To which GitLab's product manager for Verify (CI) replied, "Thanks for bringing this up I hadn't seen your contribution! I think this is a great idea. I know the technical team has been overwhelmed with community contributions as of late - which is a good problem to have but one that we're still solving. I'm going to try and shepherd this one along myself."

Some users think that if GitLab can pull off the npm registry well, this might prove to be the beginning of a universal package management server built into GitLab. One of the comments reads, "Gitlab API is amazingly simple and flexible, can be used efficiently from the terminal to list CI jobs, your issues, edit them."

Users are also comparing GitLab with GitHub, with some supporting GitHub. One user commented, "GitLab's current homepage hides their actual site (the repositories) and makes it hard as a developer to actually get started compared to Github." Another user commented, "We've started using Gitlab where I work and it's so much better than GitHub."

Users are also facing issues with memory optimization. One of the comments reads, "I like GitLab but noticed my Docker container running it is steadily requiring more memory to run smoothly. It's sitting at 12GB right now, which is a little too high for my taste. I wish there were ways to reduce this."

Introducing GitLab Serverless to deploy cloud-agnostic serverless functions and applications
GitLab 11.5 released with group security and operations-focused dashboard, control access to GitLab pages
GitLab 11.4 is here with merge request reviews and many more features
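The include feature for CI/CD files can be sketched as a `.gitlab-ci.yml` fragment. The project path, ref, and file name below are hypothetical, and the exact template name should be checked against the templates shipped with your GitLab version:

```yaml
# .gitlab-ci.yml
include:
  # a predefined template shipped with GitLab, e.g. for the sast job
  - template: SAST.gitlab-ci.yml
  # a CI/CD snippet maintained in another project (illustrative path)
  - project: 'my-group/ci-templates'
    ref: master
    file: '/templates/build.gitlab-ci.yml'
```

Jobs defined in the included files behave as if they were written inline, so teams can maintain one canonical definition instead of copy-pasting it across repositories.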
Meet Sublime Merge, a new Git client from the makers of Sublime Text

Prasad Ramesh
21 Sep 2018
3 min read
The makers of Sublime Text released a new Git client yesterday. Called Sublime Merge, this tool combines the user interface of Sublime Text with a from-scratch implementation of Git. The result is a Git client with a better and familiar interface. Sublime Merge has no time limit, no metrics, and no tracking of your usage. It has two themes, light and dark. The evaluation version is fully functional but does not have the dark theme, and you don't need an account to use it. Here are some of the features of Sublime Merge.

An integrated merge tool
An integrated merge tool allows resolving conflicts in Sublime Merge itself instead of having to open another editor. There is a 3-pane view for viewing conflicts: the changes made by you are on the left, those made by others on the right, and the resolved text is displayed in the center pane, with buttons to choose which changes to accept.

Advanced diffs
Where necessary, Sublime Merge will display exactly which individual characters have been changed in a commit. This includes renames, moves, resolving conflicts, or just looking at the commit history. Simply select any two commits in Sublime Merge with Ctrl+Left Mouse to show the diff between them.

Key bindings
There are also good keyboard usability options. The Tab key can be used to navigate through different parts of the application, the space bar toggles expansion, and Enter stages/unstages hunks. The Command Palette allows quick access to a large set of Git commands and is triggered by Ctrl+P.

Command line integration
Sublime Merge works hand-in-hand with the command line. All repository changes are updated live, and things work the same from the command line as they would from the UI. So either the GUI or the command line can be used for different functions; the choice is yours. The smerge tool that comes with Sublime Merge can be used to open repositories, blame files, and search for commits.

Advanced search
Sublime Merge features find-as-you-type search to find the commit with exact matches. You can search for commit messages, commit authors, file names, and even wildcard patterns. Complex search queries can also be constructed using 'and', 'or', and '()' symbols for deep searches within folders.

Use of real Git
Working with Sublime Merge means you're working with the real Git, not just a simplified version. Hovering over a button will show you which command it will run. Sublime Merge uses the same lingo as Git and doesn't make use of any state beyond Git itself. It uses a custom implementation of Git for reading repositories, which drives its high-performance functionality. However, Git itself is used directly in Sublime Merge for repository-mutating operations like staging, committing, and checking out branches.

Downloads and licensing
Individual licenses are lifetime, with three years of updates included. For business licenses, a subscription is available. Sublime Merge is in its early stages and has only been used by the makers and a small team of beta testers; now they have invited other users to try it as well. You can download and read more about the Git client from the Sublime Merge website.

TypeScript 3.0 is finally released with 'improved errors', editor productivity and more
GitHub introduces 'Experiments', a platform to share live demos of their research projects
Packt's GitHub portal hits 2,000 repositories

A recap of the Linux Plumbers Conference 2019

Vincy Davis
17 Sep 2019
4 min read
This year’s Linux Plumbers Conference concluded on the 11th of September 2019. This invitation-only conference for Linux top kernel developers was held in Lisbon, Portugal this year. The conference brings developers working on the plumbing of Linux - kernel subsystems, core libraries, windowing systems, etc. to think about core design problems. Unlike most tech conferences that generally discuss the future of the Linux operating system, the Linux Plumbers Conference has a distinct motive behind it. In an interview with ZDNet, Linus Torvalds, the Linux creator said, “The maintainer summit is really different because it doesn't even talk about technical issues. It's all about the process of creating and maintaining the Linux kernel.” In short, the developers attending the conference know confidential and intimate details about some of the Linux kernel subsystems, and maybe this is why the conference has the word ‘Plumbers’ in it. Read Also: Introducing kdevops, a modern DevOps framework for Linux kernel development The conference is divided into several working sessions focusing on different plumbing topics. This year the Linux Plumbers Conference had over 18 microconferences, with topics like RISC-V, tracing, distribution kernels, live patching, open printing, toolchains, testing and fuzzing, and more. Some Micro conferences covered in Linux Plumbers Conference 2019 The Linux Plumbers 2019 RISC-V MC (microconference) focussed on finding the solutions for changing the kernel. In the long run, this discussion of changing the kernel is expected to result in active developer participation for code review/patch submissions for a better and more stable kernel for RISC-V. Some of the topics covered in RISC-V MC included RISC-V platform specification progress and fixing the Linux boot process in RISC-V. 
The Plumbers Live Patching MC held an open discussion for all involved stakeholders on live patching issues, with the aim of making live patching of the Linux kernel, along with userspace live patching, feature complete. This open discussion has been a success at past conferences, producing useful output that helps push live patching development forward. Topics included kernel live patching developments over the past year, an API for state changes made by callbacks, and source-based livepatch creation tooling.

The System Boot and Security MC concentrated on open source security, including bootloaders, firmware, BMCs, and TPMs. Key participants included everybody interested in GRUB, iPXE, coreboot, LinuxBoot, SeaBIOS, UEFI, OVMF, TianoCore, IPMI, OpenBMC, TPM, and other related projects and technologies.

The main goal of this year’s Remote Direct Memory Access (RDMA) MC was to resolve open issues in RDMA and PCI peer-to-peer transfers for GPU and NVMe applications, including HMM and DMABUF topics, RDMA and DAX, and contiguous system memory allocations for userspace, an issue unresolved since 2017. Other areas of interest included multi-vendor virtualized 'virtio' RDMA, as well as non-standard driver features and their impact on the design of the subsystem.

Read Also: Linux kernel announces a patch to allow 0.0.0.0/8 as a valid address range

Linux developers who attended the Plumbers 2019 conference were appreciative of it and took to Twitter to share their experiences.

https://twitter.com/russelldotcc/status/1172193214272606209

https://twitter.com/odeke_et/status/1173108722744225792

https://twitter.com/jwboyer19/status/1171351233149448193

The videos of the conference are not out yet; the team behind the conference has tweeted that they will be uploading them soon. Keep checking this space for more details about the Linux Plumbers Conference 2019.
Meanwhile, you can check out last year’s talks on YouTube.

Latest news in Linux

Lilocked ransomware (Lilu) affects thousands of Linux-based servers

Microsoft announces its support for bringing exFAT in the Linux kernel; open sources technical specs

IBM open-sources Power ISA and other chips; brings OpenPOWER foundation under the Linux Foundation