
Tech News - Web Development

354 Articles

Now you can run nginx on Wasmjit on all POSIX systems

Natasha Mathur
10 Dec 2018
2 min read
The Wasmjit team announced last week that you can now run Nginx 1.15.3, a free and open-source high-performance HTTP server and reverse proxy, in user space on all POSIX systems.

Wasmjit is a small embeddable WebAssembly runtime that can be easily ported to most environments. It primarily targets a Linux kernel module capable of hosting Emscripten-generated WebAssembly modules, and it ships with a host environment for running in user space on POSIX systems. This lets you run WebAssembly modules without having to run an entire browser.

Getting Nginx to run had been a major goal for the Wasmjit team ever since its first release in late July. "While it might be convenient to run the same binary on multiple systems without modification ('write once, run anywhere'), this goal was chosen because IO-bound / system call heavy servers stand to gain the most by running in kernel space. Running FUSE file systems in kernel space is another motivating use case that Wasmjit will soon support," mentions the Wasmjit team.

Other future goals for Wasmjit include an interpreter, a Rust runtime for Rust-generated wasm files, a Go runtime for Go-generated wasm files, an optimized x86_64 JIT, an arm64 JIT, and a macOS kernel module.

Wasmjit running Nginx has been tested on Linux, OpenBSD, and macOS so far. The complete compiled version of Nginx, without any modifications and with multi-process capability, has been used, exercising the more complex parts of the POSIX API that Nginx needs, such as signal handling and forking.

That said, kernel-space support still needs work, as Emscripten delegates some large APIs such as getaddrinfo() and strftime() to the host implementation; these need to be re-implemented in the kernel. Kernel-space versions of fork(), execve(), and signal handling also need to be implemented.
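The core idea here (a host environment runs WebAssembly modules outside the browser) can be illustrated with the standard WebAssembly JavaScript API available in Node.js. This is a minimal sketch, not Wasmjit itself: it instantiates a tiny hand-assembled module that exports an add function.

```javascript
// Bytes of a minimal WebAssembly module:
// (module (func (export "add") (param i32 i32) (result i32)
//   local.get 0  local.get 1  i32.add))
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,             // magic + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f,       // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                                     // function section
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00,       // export "add"
  0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b, // code body
]);

// Compile and instantiate synchronously; no browser required.
const module = new WebAssembly.Module(bytes);
const instance = new WebAssembly.Instance(module);

console.log(instance.exports.add(2, 3)); // 5
```

Real Emscripten output is far larger and expects a host that supplies POSIX-like imports, which is exactly the role Wasmjit's host environment plays.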
Also, Wasmjit is currently alpha-quality software under development and may behave unpredictably when used in production.

Related reads:
Security issues in nginx HTTP/2 implementation expose nginx servers to DoS attack
NGINX Hybrid Application Delivery Controller Platform improves API management, manages microservices and much more!
Getting Started with Nginx


npm v6 is out!

Sugandha Lahoti
02 May 2018
2 min read
Following the recent release of Node 10.0.0, npm has released version 6 in collaboration with Node.js. npm v6 is a major update to the popular package manager for the JavaScript runtime environment Node.js. npm typically ships a new major version every year around springtime, and following this pattern, npm v6 was introduced on April 26, 2018. This update brings powerful security features to every developer who works with open-source code.

Built-in security features

npm v6 is the result of npm's acquisition of the Node Security Platform (NSP), and it introduces two new security features:

npm registry: Every user of the npm v6 registry will begin receiving automatic warnings if the code they use has a known security issue. npm automatically reviews install requests against the NSP database and returns a warning if the code contains a vulnerability.

npm audit: npm v6 has a new command, 'npm audit', which lets developers recursively analyze their dependency trees to identify specific insecurities, after which they can swap in a new version or find a safer alternative dependency.

Both security features are available free of charge to every npm user, with no purchase or registration required. These resources are open sourced to maximize the community benefit: by alerting the entire community to security vulnerabilities within a tool, npm can make JavaScript development safer for everyone.

Additional features

Beyond security, there are a large number of other performance updates:

npm v6 is up to 17x faster than the npm of one year ago.
npm ci is optimized for continuous integration/continuous deployment (CI/CD) workflows and is almost 2x-3x faster there.
Webhooks are now configurable directly within the npm CLI.
Packages can be easily verified against tampering and corruption, with more visibly integrated metadata.
Teams can now more easily share reproducible builds, with automatic resolution of lockfile conflicts.

Also check out the release notes for the npm v6 release, and the roadmap for the year ahead.

Related reads:
Node 10.0.0 released, packed with exciting new features
How is Node.js Changing Web Development?
How to deploy a Node.js application to the web using Heroku
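In CI, `npm audit --json` emits a machine-readable report, and a common pattern is to fail the build only above a chosen severity. Below is a hedged sketch in Node.js: the severity-counts shape is simplified for illustration (real npm audit output has more fields), and the gate function itself is hypothetical, not part of npm.

```javascript
// Decide whether a build should fail, given severity counts like those
// summarized by `npm audit --json` (shape simplified for illustration).
const SEVERITY_ORDER = ["info", "low", "moderate", "high", "critical"];

function shouldFailBuild(severityCounts, threshold = "high") {
  const min = SEVERITY_ORDER.indexOf(threshold);
  // Fail if any vulnerability at or above the threshold severity exists.
  return SEVERITY_ORDER
    .slice(min)
    .some((level) => (severityCounts[level] || 0) > 0);
}

// Example: two moderate issues and one high issue.
const counts = { moderate: 2, high: 1 };
console.log(shouldFailBuild(counts, "high"));     // true
console.log(shouldFailBuild(counts, "critical")); // false
```

A stricter team might set the threshold to "moderate"; a looser one to "critical". The point is that the audit data is structured, so policy can be automated rather than eyeballed.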


Microsoft open sources Web Template Studio, a VS Code extension to easily create full-stack web apps

Bhagyashree R
16 May 2019
3 min read
At Build 2019, Microsoft showcased Web Template Studio (WebTS), a cross-platform Visual Studio Code extension built by a team of Microsoft Garage interns. Yesterday, the tech giant open sourced the extension under the MIT license and announced its availability on the Visual Studio Marketplace. The extension is currently only available in preview form.

Explaining the vision behind the extension, Kelly Ng, one of the software engineering interns who helped build it, said, "A lot of times in a hackathon, you spend the whole hackathon just setting all of that up before you can start programming. With our tool, you can hook everything up in just 5 or 6 minutes."

What is Microsoft Web Template Studio?

Written in TypeScript and React, Microsoft WebTS lets developers easily create new web applications with the help of its "dev-friendly wizard". It is built along the same lines as an existing Visual Studio extension, Windows Template Studio, which simplifies and accelerates the creation of Universal Windows Platform (UWP) apps.

With this extension, you can generate boilerplate code for a full-stack web application by selecting your choice of front-end framework, back-end framework, pages, and cloud services. Right now, WebTS only supports React.js for the frontend and Node.js for the backend; in the future, the team plans to add more frameworks, such as Angular and Vue.

The extension comes with various app page templates, including a blank page, common layouts, and pages that implement common patterns like grids or lists. You just need to choose from these pages to add a common UI to your web app. Once you are done with all that, you specify which Azure cloud services you want to use for your project; currently, the extension supports Azure Cosmos DB for storage and Azure Functions for compute.

If you want to use the extension, head over to the Web Template Studio page on the Visual Studio Marketplace and click Install.
The project is still in its early stages, and the team plans to support more frameworks and services as it grows with the help of the community. In case you want to contribute, check out its GitHub repository. You can read the full announcement on the Microsoft Blog.

Related reads:
Microsoft Build 2019: Microsoft showcases new updates to MS 365 platform with focus on AI and developer productivity
Microsoft Build 2019: Introducing Windows Terminal, application packed with multiple tab opening, improved text and more
Microsoft announces 'Decentralized Identity' in partnership with DIF and W3C Credentials Community Group


Oracle Apex 18.1 is here!

Natasha Mathur
31 May 2018
4 min read
Oracle announced the much-awaited Oracle APEX 18.1 today. Oracle Application Express (APEX) is a free development tool from Oracle that lets developers quickly create web-based applications on an Oracle database using only a web browser.

With Oracle APEX 18.1, Oracle provides easy integration of data from REST services with data from SQL queries within an Oracle database to build scalable applications. The new release also includes high-quality features for creating applications without the need for coding. Let's look at some of the major features and improvements in Oracle APEX 18.1.

Key features and updates

Application features: High-level application features such as access control, email reporting, feedback, activity reporting, and dynamic user interface selection can be added to your app. An application can also be created with a "cards" report interface, a timeline report, or a dashboard.

REST Enabled SQL support: APEX 18.1 lets you build charts, calendars, reports, and trees, and invoke processes, against Oracle REST Data Services (ORDS)-provided REST Enabled SQL services. There is no need for a database link to include data from remote database objects in your APEX application; REST Enabled SQL handles it for you.

Web Source Modules: Different REST endpoints can be used to declaratively access data, such as ordinary REST data feeds, REST services from Oracle REST Data Services, and Oracle Cloud Applications REST services. This provides the ability to shape REST data source results using industry-standard SQL.

REST Workshop: Apart from helping create REST services against Oracle database objects, the updated REST Workshop can now generate Swagger documentation against REST definitions with a single button click.

Application Builder improvements: Oracle APEX 18.1 lets developers create components more quickly, as wizards are now streamlined with fewer steps and smarter defaults. Usability enhancements have been made to Page Designer, including an advanced color palette, graphics on page elements, and sticky filters, all of which improve developer productivity.

Social authentication: Oracle APEX 18.1 comes with a native authentication scheme for social sign-in. Developers can create applications in APEX using authentication methods such as Oracle Identity Cloud Service, Facebook, Google, generic OAuth2, and generic OpenID Connect, without coding.

Charts: APEX 18.1 upgrades to the Oracle JET 4.2 engine, which brings updated charts and APIs. It also adds chart types such as box plot, Gantt, and pyramid, with support for multi-series sparse data sets.

Mobile UI: New component types, namely ListView, Reflow Report, and Column Toggle, have been introduced for creating mobile applications. Mobile-focused improvements to the APEX Universal Theme mean that page headers and footers display consistently on mobile devices, and floating item label templates help optimize the information presented on a mobile screen. Oracle APEX 18.1 also offers declarative support for touch-based dynamic actions such as tap, double tap, swipe, press, and pan.

Font APEX: The new release includes a set of 32 x 32 high-resolution icons, with the right icon size selected automatically.

Accessibility: Accessibility mode is deprecated; the latest release instead relies on the APEX Advisor, which runs a set of tests to identify the most common accessibility issues.

These are the major updates and improvements in Oracle APEX 18.1. Existing Oracle APEX customers just need to install the 18.1 version to get all the latest upgrades. To know more about Oracle APEX 18.1, check out the official Oracle APEX blog.

Related reads:
Xamarin Forms 3, the popular cross-platform UI Toolkit, is here!
Firefox 60 arrives with exciting updates for web developers: Quantum CSS engine, new Web APIs and more
Will Oracle become a key cloud player, and what will it mean to development & architecture community?


Mozilla Thunderbird 78 will include OpenPGP support, expected to be released by Summer 2020

Savia Lobo
09 Oct 2019
3 min read
Yesterday, the Thunderbird developers announced that OpenPGP support will be implemented in Thunderbird 78, which is planned as a Summer 2020 release. This means that support for Thunderbird in Enigmail will be discontinued. Enigmail is a data encryption and decryption extension for Mozilla Thunderbird and the SeaMonkey internet suite that provides OpenPGP public-key email encryption and signing.

Patrick Brunschwig, the lead developer of the Enigmail project, says "this is an inevitable step." The Mozilla developers have been, and still are, actively working on removing old code from their codebase. This affects not only Thunderbird but also add-ons. "While it was possible for Thunderbird to keep old 'legacy' add-ons alive for a certain time, the time has come for Thunderbird to stop supporting them," Brunschwig added.

Thunderbird is unable to bundle the GnuPG software due to incompatible licenses (MPL version 2.0 vs. GPL version 3+). Instead of relying on users to obtain and install external software like GnuPG or Gpg4win, the developers intend to identify an alternative, license-compatible library and distribute it as part of Thunderbird 78 on all supported platforms.

Will OpenPGP support in Thunderbird 78 mark an end to Enigmail?

Brunschwig, in an email thread, writes that he "will continue to support and maintain Enigmail for Thunderbird 68 until 6 months after Thunderbird 78 will have been released (i.e. a few months beyond Thunderbird 68 EOL)." He further mentioned that Enigmail will no longer run on Thunderbird 72 beta and newer: Thunderbird 78 will drop the APIs that Enigmail requires and will only allow the new "WebExtensions". WebExtensions have a completely different API than classical add-ons and a much-reduced set of capabilities for extending the user interface. Enigmail itself will not end, however; Brunschwig will continue to maintain and support Enigmail for Postbox, which runs on a different release schedule than Thunderbird, for the foreseeable future.

"The Thunderbird developers and I have therefore agreed that it's much better to implement OpenPGP support directly in Thunderbird. The set of functionalities will be different than what Enigmail offers, and at least initially likely be less feature-rich. But in my eyes, this is by far outweighed by the fact that OpenPGP will be part of Thunderbird and no add-on and no third-party tool will be required," Brunschwig writes.

To process OpenPGP messages, GnuPG stores secret keys, the public keys of correspondents, and trust information for public keys in its own file format. Thunderbird 78 will not reuse the GnuPG file format but will implement its own storage for keys and trust. Users who already own secret keys from their previous use of Enigmail and GnuPG, and who wish to reuse them, will be required to transfer their keys to Thunderbird 78. On systems that have GnuPG installed, the team may offer assisted importing.

Many users are awaiting next year's summer release.
https://twitter.com/robertjhansen/status/1181561188301320192
https://twitter.com/glynmoody/status/1181550756916334592

ZDNet writes, "What Mozilla devs will do remains to be seen, and they might end up creating a new OpenPGP library from scratch -- which might take up a lot of Mozilla's resources but will be a win for the open-source community as a whole."

To know more about this news in detail, read the Mozilla Wiki.

Related reads:
Cloudflare and Google Chrome add HTTP/3 and QUIC support; Mozilla Firefox soon to follow suit
Mozilla introduces Neqo, Rust implementation for QUIC, new http protocol
Mozilla proposes WebAssembly Interface Types to enable language interoperability


Cloudflare’s decentralized vision of the web: InterPlanetary File System (IPFS) Gateway to create distributed websites

Melisha Dsouza
18 Sep 2018
4 min read
The Cloudflare team has introduced Cloudflare's IPFS Gateway, which makes accessing content from the InterPlanetary File System (IPFS) easy and quick without having to install and run any special software on your computer. The gateway, which supports these new distributed web technologies, is hosted at cloudflare-ipfs.com. The team asserts that this will lead to highly reliable and security-enhanced web applications.

A brief gist of IPFS

When a user accesses a website from the browser, the browser tracks down the centralized repository of the website's content, sends a request from the user's computer to that origin server, and the server sends the content back. This centralization makes it impossible to keep content online if the origin server removes the data: if the origin server faces downtime, or the site owner decides to take the data down, the content becomes unavailable.

IPFS, by contrast, is a distributed file system that allows users to share files that are distributed to other computers throughout the networked file system. A user's content can be stored on many nodes of the network, so data can be safely backed up.

Key differences between IPFS and the traditional web

#1 Free caching and serving of content: IPFS provides free caching and serving of content; anyone can sign up their computer to be a node in the system and start serving data. The traditional web, on the other hand, relies on big hosting providers to store content and serve it to the rest of the web, and setting up a website with these providers costs money.

#2 Content-addressed data: Rather than location-addressed data, IPFS uses content-addressed data. On the traditional web, when a user navigates to a website, the browser fetches data stored at the website's IP address, and the server sends back the relevant information from that IP. With IPFS, every block of data stored in the system is addressed by a cryptographic hash of its contents. When users request a piece of data from IPFS, they request it by its hash, i.e. content that has a hash value of, for example, QmXnnyufdzAWL5CqZ2RnSNgPbvCc1ALT73s6epPrRnZ1Xy.

Why is Cloudflare's IPFS Gateway important?

IPFS increases the resilience of the network. The content with a hash of QmXnnyufdzAWL5CqZ2RnSNgPbvCc1ALT73s6epPrRnZ1Xy could be stored on dozens of nodes, so if one node storing the content goes down, the network simply looks for the content on another node.

In addition to resilience, there is an automatic level of security built into the system. If the data requested by a user is tampered with in transit, the hash of what arrives will differ from the hash that was asked for, so the system has a built-in way of knowing whether content has been tampered with.

Users can access any of the billions of files stored on IPFS from their browser. Using Cloudflare's gateway, they can also build a website hosted entirely on IPFS and make it available to users at a custom domain name; any website connected to the IPFS gateway is provided with a free SSL certificate.

IPFS embraces a new, decentralized vision of the web. Users will be able to create static websites, containing information that cannot be censored by governments, companies, or other organizations, that are served entirely over IPFS. To know more about this announcement, head over to Cloudflare's official blog.

Related reads:
7 reasons to choose GraphQL APIs over REST for building your APIs
Laravel 5.7 released with support for email verification, improved console testing
Javalin 2.0 RC3 released with major updates!

Introducing Web High Level Shading Language (WHLSL): A graphics shading language for WebGPU

Bhagyashree R
14 Nov 2018
3 min read
Yesterday, the W3C GPU for the Web Community Group introduced a new graphics shading language for the WebGPU API called Web High-Level Shading Language (WHLSL, pronounced "whistle"). The language extends HLSL to provide better security and safety.

Last year, the W3C GPU for the Web Community Group was formed by engineers from Apple, Mozilla, Microsoft, Google, and others. This group is working towards bringing a low-level 3D graphics API to the web, called WebGPU. WebGPU, like other modern 3D graphics APIs, uses shaders: programs that take advantage of the specialized architecture of GPUs. For instance, apps designed for Metal use the Metal Shading Language, apps designed for Direct3D 12 use HLSL, and apps designed for Vulkan use SPIR-V or GLSL. That's why the WebKit team introduced WHLSL for the WebGPU API.

Here are some of the requirements WHLSL aims to fulfill:

A safe shader language: Irrespective of what an application does, a shader should only be allowed to read or write data from the web page's domain. Without this safety guarantee, malicious websites could run a shader that reads pixels out of other parts of the screen, even from native apps.

A well-specified language: To ensure interoperability between browsers, a shading language for the web must be precisely specified. Also, rendering teams often write shaders in their own custom in-house language and later cross-compile them to whichever language is necessary; the shader language should therefore have a reasonably small set of unambiguous grammar and type-checking rules that compiler writers can reference when emitting this language.

Translatable to other languages: As WebGPU is designed to work on top of Metal, Direct3D 12, and Vulkan, shaders should be translatable to the Metal Shading Language, HLSL (or DXIL), and SPIR-V. There should also be a way to represent shaders in a form acceptable to APIs other than WebGPU.

A performant language: To deliver good overall performance, the compiler needs to run quickly, and the programs it produces need to run efficiently on real GPUs.

Easy to read and write: The shader language should be easy to read and write, and familiar to both GPU and CPU programmers. GPU programmers are important clients because they have experience writing shaders. And since GPUs are now popular in fields beyond rendering, including machine learning, computer vision, and neural networks, CPU programmers are important clients too.

To learn more about WHLSL, check out WebKit's post.

Related reads:
Working with shaders in C++ to create 3D games
Torch AR, a 3D design platform for prototyping mobile AR
Bokeh 1.0 released with a new scatter, patches with holes, and testing improvements


Debian 10 codenamed ‘buster’ released, along with Debian GNU/Hurd 2019 as a port

Vincy Davis
08 Jul 2019
4 min read
Two days ago, the team behind Debian announced the release of Debian stable version 10 (codename 'buster'), which will be supported for the next 5 years. Debian 10 uses the Wayland display server by default, includes over 91% source-reproducible packages, and ships with several desktop applications and environments. Yesterday, Debian also released Debian GNU/Hurd 2019, which is a port release; it is currently available for the i386 architecture, with about 80% of the Debian archive.

What's new in Debian 10

Wayland display server: In this release, GNOME uses the Wayland display server by default instead of Xorg. Wayland's simple and modern design provides advantages in terms of security. The Xorg display server is still installed by default in Debian 10, and users can use the default display manager to change the display server for their session.

Reproducible Builds: In Debian 10, the Reproducible Builds project aims to have over 91% of the source packages build into bit-for-bit identical binary packages. This works as an important verification feature for users, protecting them against malicious attempts to tamper with compilers and build networks.

Desktop applications: Debian 10 'buster' ships with several desktop applications and environments, including Cinnamon 3.8, GNOME 3.30, KDE Plasma 5.14, and LXDE 0.99.2.

Other highlights in Debian 10:

AppArmor, a mandatory access control framework for restricting programs' capabilities, is installed and enabled by default for security-sensitive environments.
All methods provided by the Advanced Package Tool (APT), except cdrom, gpgv, and rsh, can optionally make use of seccomp-BPF sandboxing. The https method for APT is included in the apt package and does not need to be installed separately.
Network filtering is based on the nftables framework by default. Starting with iptables v1.8.2, the binary package includes two variants of the iptables command-line interface: iptables-nft and iptables-legacy.
UEFI (Unified Extensible Firmware Interface) support, a specification for a software program that connects a computer's firmware to its operating system, was introduced in Debian 7 and has been greatly improved in Debian 10. Secure Boot support is included in this release for the amd64, i386, and arm64 architectures and will work on most Secure Boot-enabled machines, so users will not have to disable Secure Boot support in the firmware configuration.
The cups and cups-filters packages are installed by default in Debian 10, letting users take advantage of driverless printing.
This release includes numerous updated software packages, such as Apache 2.4.38, BIND DNS Server 9.11, Chromium 73.0, Emacs 26.1, Firefox 60.7, and more.

Visit the official Debian website for more details on Debian 10.

What's new in Debian GNU/Hurd 2019

An Advanced Configuration and Power Interface (ACPI) translator is now available; it is currently only used to shut down the system.
The LwIP TCP/IP stack, a widely used open-source TCP/IP stack designed for embedded systems, is now available as an option.
A Peripheral Component Interconnect (PCI) arbiter has been introduced; it will be useful for properly managing PCI access, as well as for providing fine-grained hardware access.
New optimizations include protected payloads, better paging management and message dispatch, and gsync synchronization.
Support for LLVM has also been introduced.
Besides the Debian installer, a pre-installed disk image is also available.

The general reaction to both pieces of Debian news has been positive, with users praising Debian for always staying up to date with the latest features. A Redditor says, "Through the years I've seen many a 'popular' distro come and go, yet Debian remains." Another user on Hacker News adds, "I left Redhat at 8.0 (long time ago, before Fedora) and started using debian/ubuntu and never looked back, in my opinion, while Redhat made a fortune by its business model, Debian and ubuntu are the true community OS, I can't ask for more. Debian has been my primary Server for the last 15 years, life is good with them. Thank you so much to the maintainers and contributors for putting so much effort into them."

Read the Debian mailing list for more information on Debian GNU/Hurd.

Related reads:
Debian GNU/Linux port for RISC-V 64-bits: Why it matters and roadmap
Debian maintainer points out difficulties in Deep Learning Framework Packaging
Debian project leader elections goes without nominations. What now?


Mozilla introduces Iodide, a tool for data scientists to create interactive documents using web technologies

Bhagyashree R
14 Mar 2019
3 min read
On Tuesday, Brendan Colloran, a data scientist at Mozilla, introduced an experimental tool called Iodide, which allows data scientists to create interactive documents using web technologies. As the tool is currently in the alpha stage, it is not recommended for critical work.

Why is Iodide needed?

As part of their job, data scientists not only write code and analyze data, they also have to share results and insights with decision-making teams. While they have a wide range of tools for analyzing data, such as Jupyter Notebook and RStudio, there are very few options for sharing results effectively using web technologies. Often, data scientists just copy key figures and summary statistics into a Google Doc.

Iodide aims to eliminate the round trips between exploring data in code and creating an understandable report. It also aims to make collaboration among data scientists more convenient: when one data scientist is reading another's final report and wants to look at the code behind it, they can easily do so.

How does Iodide work?

Iodide provides an "explore view" consisting of a set of panes: an editor for writing code, a console for viewing the output from code, and a workspace viewer for examining the variables you have created. In addition to these, it has a "report preview" pane that shows a preview of your report. (Image source: Mozilla)

When you click the REPORT button, the contents of your report preview expand to fill the entire window. This is very useful for readers who are not interested in the technical details, as it hides the code. (Image source: Mozilla)

Once the report is ready, users can send a link directly to colleagues and collaborators. This gives them access to the clean, readable document as well as the underlying code and the editing environment. So, in case they want to review the code, they can switch to the "explore mode"; if they want to use the code for their own work, they can fork it and start working on their own version, similar to GitHub's fork option.

To know more in detail, check the blog post shared by Mozilla.

Related reads:
Mozilla's Firefox Send is now publicly available as an encrypted file sharing service
Mozilla Firefox will soon support 'letterboxing', an anti-fingerprinting technique of the Tor Browser
Mozilla shares key takeaways from the Design Tools survey


Mozilla optimizes calls between JavaScript and WebAssembly in Firefox, making it almost as fast as JS to JS calls

Yesterday, Mozilla announced that in the latest version of Firefox Beta, calls between JS and WebAssembly are faster than non-inlined JS to JS function calls. These optimizations were made with two aspects of the engine's work in mind: reducing bookkeeping and cutting out intermediaries.

How they made WebAssembly function calls faster

With their recent work in Firefox, the team has optimized calls in both directions: from JavaScript to WebAssembly and from WebAssembly to JavaScript. All of these optimizations make the engine's work easier, and the improvements fall into two groups:

Reducing bookkeeping: getting rid of unnecessary work to organize stack frames
Cutting out intermediaries: taking the most direct path between functions

How they optimized WebAssembly to JavaScript calls

The browser engine has to deal with two different kinds of languages while going through your code, even if the code is all written in JavaScript: bytecode and machine code. The engine needs to be able to jump back and forth between these two languages. When it makes these jumps, it needs to have some information in place, like the place from where it needs to resume, and it must keep the stack frames for the two kinds of code separated. To organize its work, the engine gets a folder and puts this information in it.

When the Firefox developers first added WebAssembly support, they used a different type of folder for it. So even though JIT-ed JavaScript code and WebAssembly code were both compiled and speaking machine language, they were treated as if they were speaking different languages. This was unnecessarily costly in two ways:

An unnecessary folder is created, which adds setup and teardown costs
It requires trampolining through C++ to create the folder and do other setup

They fixed this by generalizing the code to use the same folder for both JIT-ed JavaScript and WebAssembly. This made calls from WebAssembly to JS almost as fast as JS to JS calls.
How they optimized JavaScript to WebAssembly calls

JavaScript and WebAssembly follow different customs even when they are speaking the same language. For instance, to handle dynamic types, JavaScript uses something called boxing. Because JavaScript doesn't have explicit types, they need to be figured out at runtime, so the engine attaches a tag to each value to keep track of its type. This turns one simple operation into four operations.

WebAssembly, by contrast, expects parameters to be unboxed and doesn't box its return values; since it is statically typed, it doesn't need this overhead. So before the engine hands the parameters to a WebAssembly function, it needs to unbox the values and put them in registers. Previously, it had to go through C++ to prepare the values when going from JS to WebAssembly. Going through this intermediary was a huge cost, especially for something that isn't that complicated. To solve this, they took the code that C++ was running and made it directly callable from JIT code. Now, when the engine goes from JavaScript to WebAssembly, the entry stub unboxes the values and places them in the right place.

Along with these calls, they have also optimized monomorphic and built-in calls. To understand the optimizations in detail, check out Lin Clark's official announcement on Mozilla's website.

Mozilla updates Firefox Focus for mobile with new features, revamped design, and Geckoview for Android
Mozilla releases Firefox 62.0 with better scrolling on Android, a dark theme on macOS, and more
Mozilla, Internet Society, and web foundation wants G20 to address "techlash" fuelled by security and privacy concerns
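For readers who want to poke at the JS-to-WebAssembly boundary these optimizations speed up, here is a minimal, self-contained sketch (our own illustration, not code from Mozilla's post). It hand-assembles a tiny WebAssembly module exporting a statically typed add(i32, i32) function and calls it from JavaScript; the JS numbers are unboxed on the way in, and the i32 result comes back without a type tag:

```javascript
// A minimal WebAssembly module, hand-assembled: exports add(i32, i32) -> i32.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function 0 uses type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add" (func 0)
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section, one body
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0, local.get 1, i32.add, end
]);

const module = new WebAssembly.Module(bytes);
const instance = new WebAssembly.Instance(module);

// A JS-to-WebAssembly call: the engine unboxes 2 and 3 into registers,
// runs the wasm function, and hands the i32 result back as a JS number.
console.log(instance.exports.add(2, 3)); // 5
```

Every such cross-boundary call used to pay for an extra stack-frame "folder" and a trampoline through C++; with the new work in Firefox Beta, it is nearly as cheap as a plain JS call.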
Sugandha Lahoti
13 May 2019
2 min read

Flutter gets new set of lint rules to build better Chrome OS apps

Last week at Google I/O, the Flutter UI framework expanded from mobile to multi-platform, and the company released the first technical preview of Flutter for web. On Friday, Google announced new updates to Flutter for building Chrome OS applications. Flutter tools now allow developers to build and test their apps directly on Chrome OS.

New updates for Flutter for Chrome OS

Along with Flutter's seamless resizing feature, Flutter for Chrome OS comes with additional features such as scroll wheel support, hover management, and better keyboard event support. The Flutter team has also added a new set of lint rules to the Flutter tooling to catch violations of the most important Chrome OS best practice guidelines. This helps developers get a better idea of whether their Android app is going to run well on Chrome OS. In the IDE, or when running flutter analyze at the command line, developers can see lints if their Flutter app has issues targeting Chrome OS.

Image source: GitHub

Lint rules can be turned on in a Flutter app by creating a file named analysis_options.yaml in the root of your Flutter project. The contents should look similar to this:

include: package:flutter/analysis_options_user.yaml
analyzer:
  optional-checks:
    chrome-os-manifest-checks

Developing Flutter on Chrome OS has got the developer community excited.

https://twitter.com/mklin/status/1127001767873409025
https://twitter.com/timsneath/status/1126921052922081280
https://twitter.com/lehtimaeki/status/1103602179556937729

If you'd like to target Flutter for Chrome OS, you can do so today simply by installing the latest version of Flutter.

Google I/O 2019: Flutter UI framework now extended for Web, Embedded, and Desktop
Google launches Flutter 1.2, its first feature update, at Mobile World Congress 2019
Google I/O 2019 D1 highlights: smarter display, search feature with AR capabilities, Android Q and more.
Bhagyashree R
03 May 2019
3 min read

Mozilla’s updated policies will ban extensions with obfuscated code

Yesterday, Mozilla announced that, according to its updated policies, extensions with obfuscated code will no longer be accepted on its add-ons platform. It is also becoming much stricter about blocking extensions that fail to abide by its policies. These policies will come into effect from June 2019. Last year in October, Google announced a similar policy, which came into effect at the start of this year, to prevent malicious extensions from reaching its extensions store.

If you do not know what obfuscated code means, it is basically code written to be difficult for a human to understand. Common obfuscation practices include replacing function or variable names with weird but allowed characters, using reversed array indexing, using look-alike characters, and so on. "Generally speaking, just try to find good coding guidelines and to try to violate them all," said a developer on Stack Overflow.

However, obfuscated code should not be confused with minified, concatenated, or otherwise machine-generated code, which remains acceptable. Minification refers to removing all unnecessary or redundant data that has no effect on the output, such as whitespace and code comments, or shortening variable names.

"We will no longer accept extensions that contain obfuscated code. We will continue to allow minified, concatenated, or otherwise machine-generated code as long as the source code is included. If your extension is using obfuscated code, it is essential to submit a new version by June 10th that removes it to avoid having it rejected or blocked," Caitlin Neiman said in a blog post.

If your code contains transpiled, minified, or otherwise machine-generated code, you are required to submit a copy of the human-understandable source code, along with instructions on how to reproduce that build.
Here is a snippet from Mozilla's policies: "Add-ons are not allowed to contain obfuscated code, nor code that hides the purpose of the functionality involved. If external resources are used in combination with add-on code, the functionality of the code must not be obscured. To the contrary, minification of code with the intent to reduce file size is permitted."

Mozilla also plans to take stricter steps against extensions that are found to violate its policies. Neiman said, "We will be blocking extensions more proactively if they are found to be in violation of our policies. We will be casting a wider net, and will err on the side of user security when determining whether or not to block."

If users are already using extensions that contain obfuscated code, these extensions will be disabled once the policies come into effect.

Many developers support this decision. One Redditor commented, "This is great, obfuscated code doesn't really belong anywhere in the frontend, since you have access to the code and can figure out what the program does given enough time, so why not just make it readable."

Read the announcement on the Mozilla blog, and to go through the policies, visit the MDN web docs.

Mozilla re-launches Project Things as WebThings, an open platform for monitoring and controlling devices
Mozilla introduces Pyodide, a Python data science stack compiled to WebAssembly
Mozilla developers have built BugBug which uses machine learning to triage Firefox bugs
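To illustrate the distinction the policy draws, here is a small sketch (our own example, not from Mozilla's post): three versions of the same function, one readable, one minified (still allowed, provided the source is submitted), and one lightly obfuscated in the lookup-table style the policy bans.

```javascript
// Readable original.
function totalPrice(items) {
  return items.reduce((sum, item) => sum + item.price, 0);
}

// Minified: whitespace stripped and names shortened, but the logic is
// still recognizable. Mozilla continues to allow this when the original
// source is included with the submission.
function t(i){return i.reduce((s,x)=>s+x.price,0)}

// Obfuscated: property and method names are routed through a lookup
// table purely to hide intent. This is the kind of code the updated
// policy rejects.
var _0x1a = ['price', 'reduce'];
function q(a) {
  return a[_0x1a[1]](function (b, c) { return b + c[_0x1a[0]]; }, 0);
}

const cart = [{ price: 2 }, { price: 3 }];
console.log(totalPrice(cart), t(cart), q(cart)); // 5 5 5
```

All three compute the same result; only the first two keep the purpose of the code visible to a reviewer.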
Bhagyashree R
13 Nov 2018
3 min read

Basecamp 3 faces a read-only outage of nearly 5 hours

Yesterday, Basecamp shared the cause behind the outage Basecamp 3 faced on November 8. The outage continued for nearly five hours, starting at 7:21 am CST and ending at 12:11 pm. During this time, users were only able to access existing messages, to-do lists, and files; they were prevented from entering any new information or altering any existing information.

David Heinemeier Hansson, the creator of Ruby on Rails and founder & CTO at Basecamp, said in his post that this was the worst outage Basecamp has faced in probably 10 years: "It's bad enough that we had the worst outage at Basecamp in probably 10 years, but to know that it was avoidable is hard to swallow. And I cannot express my apologies clearly or deeply enough."

https://twitter.com/basecamp/status/1060554610241224705

Key causes behind the Basecamp 3 outage

Every activity that a user does is tracked in Basecamp's events table, whether it is posting a message, updating a to-do list, or applauding a comment. The root cause of Basecamp going into read-only mode was its database hitting the ceiling of 2,147,483,647 records on this very busy events table.

Secondly, Ruby on Rails, the programming framework Basecamp uses, updated its default for database tables in version 5.1, released in 2017. This update lifted the headroom for records from 2,147,483,647 to 9,223,372,036,854,775,807 on all tables. But the ID column in Basecamp's events table was still configured as an integer rather than a big integer.

The complete timeline of the outage

7:21 am CST: Basecamp ran out of ID numbers on the events table in the database, because the column was configured as an integer rather than a big integer. A regular integer runs out of numbers at 2,147,483,647, while a big integer can grow until 9,223,372,036,854,775,807.
7:29 am CST: The team started working on a database migration to update the column type from regular integer to big integer. They tested this fix on a staging database to make sure it was safe.
7:52 am CST: The test on the staging database verified that the fix was correct, so they moved on to making the change to the production database table. Due to the huge size of the production database, the migration was estimated to take about one hour and forty minutes.
10:56 am-11:52 am CST: The upgrade to the database was completed, but verification of all the data and configuration updates were still required to ensure no other problems would appear when the application came back online.
12:22 pm CST: After successful verification, Basecamp came back online.
12:33 pm CST: Basecamp went down again, because the intense load once the application was back online overwhelmed the caching server.
12:41 pm CST: Basecamp came back online after switching over to the backup caching servers.

To read the entire update on Basecamp's outage, check out David Heinemeier Hansson's post on Medium.

GitHub October 21st outage RCA: How prioritizing 'data integrity' launched a series of unfortunate events that led to a day-long outage
Google Kubernetes Engine was down last Friday, users left clueless of outage status and RCA
Azure DevOps outage root cause analysis starring greedy threads and rogue scale units
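The two ceilings in this story are simply the ranges of signed 32-bit and 64-bit integers. A quick sketch of the arithmetic (our own illustration; BigInt is used for the 64-bit value because it exceeds JavaScript's Number.MAX_SAFE_INTEGER):

```javascript
// Signed 32-bit integer ceiling -- the "integer" column type that overflowed.
const INT_MAX = 2 ** 31 - 1;
console.log(INT_MAX); // 2147483647

// Signed 64-bit integer ceiling -- the "big integer" type that Rails 5.1
// made the default for new tables. A BigInt literal is needed because this
// value cannot be represented exactly as a double.
const BIGINT_MAX = 2n ** 63n - 1n;
console.log(BIGINT_MAX.toString()); // 9223372036854775807
```

An existing table created before Rails 5.1 keeps its old column type, which is why a busy events table could quietly march toward the 32-bit ceiling despite the framework's new default.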
Bhagyashree R
19 Dec 2018
2 min read

Chrome 72 Beta releases with public class fields, user activation, and more

Google yesterday released Chrome 72 Beta for Android, Chrome OS, Linux, macOS, and Windows. This version comes with support for public class fields, a user activation query API, and more.

Public class fields

You can now declare public class fields in scripts, either initialized or uninitialized. To declare public class fields, you write them inside a class declaration but outside of any member functions. Support for private class fields will be added in a future release.

User activation query API

Chrome 72 Beta comes with a user activation query API, which lets you check whether there has been a user activation. This was introduced to help avoid annoying web page behaviors such as autoplay. Additionally, it enables embedded iframes to examine postMessage() calls to determine whether they occurred within the context of a user activation.

Well-formed JSON.stringify

Previously, JSON.stringify returned ill-formed Unicode strings if the input contained any lone surrogates. Now, JSON.stringify outputs escape sequences for lone surrogates, making its output valid Unicode and representable in UTF-8.

What has been removed or deprecated?

Popups during page unload are no longer allowed: pages can no longer use window.open() to open a new page during unloading.
HTTP-Based Public Key Pinning (HPKP) is removed: HPKP allowed websites to send an HTTP header that pins one or more of the public keys present in the site's certificate chain, but it saw very low adoption and can create risks of denial of service and hostile pinning.
Rendering FTP resources is deprecated: directory listings will still be generated, but any non-directory resource will be downloaded rather than rendered in the browser.

Along with these updates, TLS 1.0 and TLS 1.1 are deprecated, with removal expected in Chrome 81. Read the detailed list of updates on the Chromium blog.
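Two of these features can be sketched in a few lines of plain JavaScript (our own illustration, runnable in any engine with V8 7.2-era support):

```javascript
// Public class fields: declared in the class body, outside any method.
class Counter {
  count = 0;   // initialized public field
  label;       // uninitialized public field, starts as undefined
  increment() {
    this.count += 1;
  }
}

const c = new Counter();
c.increment();
console.log(c.count, c.label); // 1 undefined

// Well-formed JSON.stringify: a lone surrogate is now emitted as an
// escape sequence rather than an ill-formed Unicode string.
console.log(JSON.stringify("\uD800")); // "\ud800"
```

The user activation query API (navigator.userActivation) is browser-only and is not shown here, since it cannot be demonstrated outside a page with real user input.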
Google’s V8 7.2 and Chrome 72 gets public class fields syntax; private class fields to come soon Google Chrome announces an update on its Autoplay policy and its existing YouTube video annotations “ChromeOS is ready for web development” – A talk by Dan Dascalescu at the Chrome Web Summit 2018
Amrata Joshi
16 Sep 2019
5 min read

Google announces two new attribute links, Sponsored and UGC and updates “nofollow”

Last week, the team at Google announced two new link attributes that provide webmasters with additional ways to identify the nature of particular links to Google Search. The team is also evolving the nofollow attribute for the same purpose.

How are the new link attributes useful?

rel="sponsored": The sponsored attribute identifies links on a site that were created as part of sponsorships, advertisements, or other compensation agreements.

rel="ugc": The UGC (User Generated Content) attribute value is used for links within user-generated content, such as forum posts and comments.

rel="nofollow": Spammers used to try to improve their websites' search engine rankings by posting comments like "Visit my discount pharmaceuticals site" on other people's blogs; these are known as comment spam. Google took steps to address comment spam by introducing the nofollow attribute in 2005 for flagging advertising-related or sponsored links. When Google sees the attribute (rel="nofollow") on a hyperlink, it doesn't give the link any credit toward ranking websites in the search results. The attribute was introduced so that spammers don't benefit from abusing public areas like blog comments, referrer lists, trackbacks, etc.

The nofollow attribute was originally used for combatting blog comment spam. It has since evolved and is also used for advertising links and unreliable user-generated links, as well as for cases where webmasters want to link to a page without implying any type of endorsement. From March 1, 2020, the nofollow link attribute will be treated as a hint for crawling and indexing purposes.

Web analysis will be easier with these attributes

All of the above attributes help in processing links for better analysis of the web, as they are now treated as hints that can be used to identify which links should be considered and which should be excluded within Search.
It is important to identify these links, as they contain valuable information that can be used to improve Search and can help in understanding how the words within these links describe the content they point at. The links can also help in spotting unnatural linking patterns.

The official post reads, "The link attributes of 'ugc' and 'nofollow' will continue to be a further deterrent. In most cases, the move to a hint model won't change the nature of how we treat such links. We'll generally treat them as we did with nofollow before and not consider them for ranking purposes. We will still continue to carefully assess how to use links within Search, just as we always have and as we've had to do for situations where no attributions were provided."

How will this affect publishers and SEO experts?

Links that were arbitrarily nofollowed might now get counted under the new hint model, which could encourage spammers and lead to an increase in link spam. Also, if nofollowed links get counted, sites with a blanket nofollow link policy might see Google count those links after all, which would impact rankings. For instance, if a website receives a lot of nofollowed links from Wikipedia and Google starts counting them, its ranking might improve. SEO experts will now have to look into which link attributes should be applied to a specific link, and adjust their strategies and CMS (Content Management Systems) setups based on the new change.

https://twitter.com/AlanBleiweiss/status/1171475313114533891?s=20

Most users on Hacker News seem to be skeptical about the new link attributes, saying they won't benefit from them. One user commented, "I run large forums and mark my links 'nofollow'. I see no reason or benefit to me to change to or add 'ugc'. It's not clear that there's any benefits for me. And it's vague enough that I don't know that there are not downsides. Seems best to do nothing."

A few others think that the purpose of the nofollow attribute has changed. Another user commented, "This means the meaning of 'nofollow' is changing? That seems a horrible idea. Previously 'nofollow' meant exactly that - 'don't follow this link please googlebot', now it will mean 'follow this link, but don't grant my site ranking onto the destination.' - That's a VERY different use case, I can't see all the millions of existing 'nofollow' tags being changed by site owners to any of these new tags. Surely a 'nogrant' or somesuch would be a better option, and leave 'nofollow' alone."

Danny Sullivan, Google's Search Liaison, responded to the criticism around the newly updated nofollow attribute:

https://twitter.com/dannysullivan/status/1171488611918696449

To know more about this news, check out the official post.

Other interesting news in web development

GitHub updates to Rails 6.0 with an incremental approach
5 pitfalls of React Hooks you should avoid - Kent C. Dodds
The Tor Project on browser fingerprinting and how it is taking a stand against it
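As a rough sketch of how a CMS might apply the new attributes, here is a hypothetical helper of our own devising (not code from Google; note that Google's announcement allows combining multiple rel values, e.g. rel="nofollow ugc"):

```javascript
// Hypothetical helper: choose rel values for a link based on its origin.
function relValuesFor({ sponsored = false, userGenerated = false, trusted = true } = {}) {
  const values = [];
  if (sponsored) values.push('sponsored');   // paid or compensated placement
  if (userGenerated) values.push('ugc');     // forum posts, comments, etc.
  if (!trusted && values.length === 0) {
    values.push('nofollow');                 // link without implied endorsement
  }
  return values;
}

function renderLink(href, text, flags) {
  const rel = relValuesFor(flags).join(' ');
  return rel
    ? `<a href="${href}" rel="${rel}">${text}</a>`
    : `<a href="${href}">${text}</a>`;
}

console.log(renderLink('https://example.com', 'review', { sponsored: true }));
// <a href="https://example.com" rel="sponsored">review</a>
```

The flag names (sponsored, userGenerated, trusted) are assumptions for the sake of illustration; a real CMS would derive them from where the link came from, such as an ad slot or a comment form.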