Tech News - Cloud Computing

175 Articles

Introducing Azure Sphere - A secure way of running your Internet of Things devices

Gebin George
02 May 2018
2 min read
Infrastructure made of connected things is trending as organizations adopt the Internet of Things. At the same time, security concerns around these connected devices continue to be a bottleneck for IoT adoption. In an effort to improve IoT security, earlier this month Microsoft released Azure Sphere, a cost-effective way of securing connected devices. Gartner claims that worldwide spending on IoT security will reach $1.5 billion in 2018. Azure Sphere is a suite of services used to enhance IoT security. The following services are included in the suite:

Azure Sphere MCUs
These are a certified class of microcontrollers specially designed for IoT security. They follow a cross-over design that combines real-time and application processors with built-in Microsoft security mechanisms and connectivity. The MCU chips are designed using custom silicon security technology made by Microsoft. Some of the highlights are:
- A Pluton security subsystem to execute complex cryptographic operations
- A cross-over MCU combining both Cortex-A and Cortex-M class processors
- Built-in network connectivity to ensure devices are up to date

Azure Sphere OS
Azure Sphere OS is a Linux distribution used to securely run the Internet of Things. This highly scalable and secure operating system runs on the specialized MCUs, adding an extra layer of security. Some of the highlights are:
- Secured application containers focusing on agility and robustness
- A custom Linux kernel enabling silicon diversity and innovation
- A security monitor to manage access and integrity

The Azure Sphere Security Service
An end-to-end security service solely dedicated to securing Azure Sphere devices: enhancing security, identifying threats, and managing trust between cloud and device endpoints. The highlights are:
- Protects devices using a certificate-based authentication system
- Ensures device authenticity by verifying that they are running genuine software
- Manages automated updates to Azure Sphere OS for threat and incident response
- Enables easy deployment of software updates to Azure Sphere connected devices

For more information, refer to the official Microsoft blog.
Serverless computing wars: AWS Lambdas vs Azure Functions
How to call an Azure function from an ASP.NET Core MVC application
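The certificate-based authentication described above can be illustrated with a minimal sketch. Note this is an illustration only: the allow-list approach, the function names, and the byte strings below are assumptions for the example, not the actual Azure Sphere API.

```python
import hashlib

# Hypothetical allow-list of trusted device certificate fingerprints.
TRUSTED_FINGERPRINTS = {
    hashlib.sha256(b"device-001-certificate-der-bytes").hexdigest(),
}

def is_trusted_device(cert_der: bytes) -> bool:
    """Return True if the SHA-256 fingerprint of the presented
    certificate is on the trusted allow-list."""
    return hashlib.sha256(cert_der).hexdigest() in TRUSTED_FINGERPRINTS
```

In this sketch, a device presenting the known certificate is accepted while any other certificate is rejected; a real service would verify a full certificate chain rather than a fingerprint allow-list.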


Google to acquire cloud data migration start-up ‘Alooma’

Melisha Dsouza
20 Feb 2019
2 min read
On Tuesday, Google announced its plans to acquire cloud migration company Alooma, which helps other companies move their data from multiple sources into a single data warehouse. Alooma not only provides services to help with migrating to the cloud, but also helps in cleaning up this data and then using it for artificial intelligence and machine learning use cases. Google Cloud’s blog states that “The addition of Alooma, subject to closing conditions, is a natural fit that allows us to offer customers a streamlined, automated migration experience to Google Cloud, and give them access to our full range of database services, from managed open source database offerings to solutions like Cloud Spanner and Cloud Bigtable.” The financial details of the deal haven’t been released yet. In early 2016, Alooma raised about $15 million, including an $11.2 million Series A round led by Lightspeed Venture Partners and Sequoia Capital. Alooma’s blog states that “Joining Google Cloud will bring us one step closer to delivering a full self-service database migration experience bolstered by the power of their cloud technology, including analytics, security, AI, and machine learning.” In a statement to TechCrunch, Google says, “Regarding supporting competitors, yes, the existing Alooma product will continue to support other cloud providers. We will only be accepting new customers that are migrating data to Google Cloud Platform, but existing customers will continue to have access to other cloud providers.” This means that, after the deal is closed, Alooma will not accept any new customers who want to migrate data to any competitor, such as Amazon’s AWS or Microsoft’s Azure. Those who use Alooma in combination with AWS, Azure, and other non-Google services will likely start looking for other solutions.
Microsoft acquires Citus Data with plans to create a ‘Best Postgres Experience’
Autodesk acquires PlanGrid for $875 million, to digitize and automate construction workflows
GitHub acquires Spectrum, a community-centric conversational platform


Cortex, an open source, horizontally scalable, multi-tenant Prometheus-as-a-service becomes a CNCF Sandbox project

Bhagyashree R
21 Sep 2018
3 min read
Yesterday, the Cloud Native Computing Foundation (CNCF) accepted Cortex as a CNCF Sandbox project. Cortex is an open source, horizontally scalable, multi-tenant Prometheus-as-a-service. It provides long-term storage for Prometheus metrics when used as a remote write destination, and it comes with a horizontally scalable, Prometheus-compatible query API. Its use cases include:
- Service providers, to manage a large number of Prometheus instances and provide long-term storage.
- Enterprises, to centralize management of large-scale Prometheus deployments and ensure long-term durability of Prometheus data.
Originally developed by Weaveworks, it is now used in production by organizations like Grafana Labs, FreshTracks, and EA.

How does it work? (Architecture diagram. Source: CNCF)
1. Scraping samples: First, a Prometheus instance scrapes all of the user’s services and then forwards the samples to a Cortex deployment. It does this using the remote_write API, which was added to Prometheus to support Cortex and other integrations.
2. The distributor distributes the samples: The instance sends all these samples to the distributor, a stateless service that consults the ring to figure out which ingesters should ingest each sample. The ingesters are arranged in a consistent hash ring, keyed on the fingerprint of the time series and stored in a consistent data store such as Consul. The distributor finds the owner ingester and forwards the sample to it, and also to the two ingesters after it in the ring. This means that if an ingester goes down, two others still have its data.
3. Ingesters batch samples into chunks: Ingesters continuously receive a stream of samples and group them together into chunks. These chunks are then stored in a backend database such as DynamoDB, Bigtable, or Cassandra. This chunking means Cortex isn’t constantly writing to its backend database.
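The distribution step above, in which a sample's owner plus the next two ingesters on the ring are chosen, can be sketched with a minimal consistent hash ring. The class and method names here are illustrative assumptions, not Cortex's actual internals:

```python
import bisect
import hashlib

class HashRing:
    """Minimal consistent hash ring: ingesters are placed on the ring by
    hashing their names; a series is routed to the first ingester at or
    after the hash of its fingerprint, plus the next replicas."""

    def __init__(self, ingesters, replication=3):
        self.replication = replication
        self.ring = sorted(
            (int(hashlib.md5(name.encode()).hexdigest(), 16), name)
            for name in ingesters
        )

    def owners(self, series_fingerprint: str):
        """Return the owner ingester and the (replication - 1) ingesters
        after it on the ring, wrapping around if needed."""
        h = int(hashlib.md5(series_fingerprint.encode()).hexdigest(), 16)
        start = bisect.bisect(self.ring, (h,)) % len(self.ring)
        return [self.ring[(start + i) % len(self.ring)][1]
                for i in range(min(self.replication, len(self.ring)))]
```

With four ingesters and the default replication of three, every series maps deterministically to three distinct ingesters, so losing one still leaves two copies of its data.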
Alexis Richardson, CEO of Weaveworks, believes that becoming a CNCF Sandbox project will help grow the Prometheus ecosystem: “By joining CNCF, Cortex will have a neutral home for collaboration between contributor companies, while allowing the Prometheus ecosystem to grow a more robust set of integrations and solutions. Cortex already has a strong affinity with several CNCF technologies, including Kubernetes, gRPC, OpenTracing and Jaeger, so it’s a natural fit for us to continue building on these interoperabilities as part of CNCF.” To know more, check out the official announcement by CNCF and also read What is Cortex?, a blog post published on the Weaveworks blog.
Google Cloud hands over Kubernetes project operations to CNCF, grants $9M in GCP credits
CNCF Sandbox, the home for evolving cloud native projects, accepts Google’s OpenMetrics Project
Modern Cloud Native architectures: Microservices, Containers, and Serverless – Part 1


How Dropbox uses automated data center operations to reduce server outage and downtime

Melisha Dsouza
17 Jan 2019
3 min read
Today, in a blog post, Dropbox explained how its Pirlo system has automated processes that were previously handled manually by Dropbox personnel. Pirlo is used by Dropbox in two main areas: validating and configuring network switches, and ensuring the reliability of servers before they enter production. This has, in turn, helped Dropbox safely manage its physical infrastructure operations with ease. Pirlo consists of a distributed MySQL-backed job queue built by Dropbox itself, using primitives like gRPC, service discovery, and its managed MySQL clusters. Switch provisioning at Dropbox is handled by the TOR Starter, a Pirlo component. The TOR Starter validates and configures switches in Dropbox data center server racks, PoP server racks, and at the different layers of the data center fabric responsible for connecting racks in the same facility together. Server provisioning and repair validation is handled by Pirlo Server Validation. All new servers arriving at the company are validated using this component, and repaired servers are also validated before they are transitioned back into production. Pirlo has automated these manual processes at Dropbox, leading to a reduction in downtime, outages, and the inefficiencies associated with incomplete or erroneous fixes. By reducing manual work, employees can now focus their attention on more value-adding jobs. Before Pirlo, these tasks had to be performed by operations engineers and subject matter experts who used various server error logs to take appropriate actions to fix failed hardware. After applying the remediation actions, the engineer would send the machine back into production via Dropbox's re-imaging system. If the remediation actions didn't fix the system or properly prepare it for re-imaging, the server would be sent back to the operations engineer for additional fixing.
This would end up consuming a lot of the operations engineer's time as well as company resources. Operations engineers who used the Pirlo system steadily increased their output by more than 40%, as the automation of manual tasks allowed them to address more issues in the same amount of time. You can head over to Dropbox's official blog to explore the workings of Pirlo and how it benefited the organization.
How to navigate files in a Vue app using the Dropbox API
Tech jobs dominate LinkedIn’s most promising jobs in 2019
NGINX Hybrid Application Delivery Controller Platform improves API management, manages microservices and much more!


AWS introduces ‘AWS DataSync’ for automated, simplified, and accelerated data transfer 

Natasha Mathur
27 Nov 2018
3 min read
The AWS team introduced AWS DataSync, an online data transfer service for automating data movement, yesterday. AWS DataSync transfers data from on-premises storage to Amazon S3 or Amazon Elastic File System (Amazon EFS) and vice versa. Let’s have a look at what’s new in AWS DataSync.

Key functionalities
- Move data 10x faster: AWS DataSync uses a purpose-built data transfer protocol along with a parallel, multi-threaded architecture that can run 10 times as fast as open source data transfer tools. This speeds up migrations and the recurring data processing workflows for analytics, machine learning, and data protection.
- Per-gigabyte fee: It is a managed service where you pay only a per-gigabyte fee for the amount of data you transfer. Other than that, there are no upfront costs and no minimum fees.
- DataSync Agent: The AWS DataSync Agent is a crucial part of the service. It connects your existing storage to the in-cloud service to automate, scale, and validate transfers, so you don't have to write scripts or modify applications.
- Easy setup: It is very easy to set up and use (console and CLI access are available). All you need to do is deploy the DataSync agent on-premises, connect it to your file systems using the Network File System (NFS) protocol, select Amazon EFS or S3 as your AWS storage, and start moving data.
- Secure data transfer: AWS DataSync offers secure data transfer over the Internet or AWS Direct Connect, with automatic encryption and data integrity validation. This minimizes the in-house development and management needed for fast and secure transfers.
- Simplify and automate data transfer: With AWS DataSync, you can perform one-time data migrations, transfer on-premises data for timely in-cloud analysis, and automate replication to AWS for data protection and recovery.
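The easy-setup flow above (agent on-premises, NFS source, S3 destination) maps onto a handful of API calls. The sketch below only builds the parameter dicts; in practice they would feed boto3's `datasync` client calls `create_location_nfs`, `create_location_s3`, and `create_task`. The hostnames and ARNs are placeholder assumptions:

```python
def nfs_to_s3_task_params(nfs_server, nfs_export, agent_arn, bucket_arn, role_arn):
    """Build the source and destination location parameters for an
    NFS -> S3 DataSync task (pure dict-building, no AWS calls made)."""
    source = {
        "ServerHostname": nfs_server,           # on-premises NFS server
        "Subdirectory": nfs_export,             # exported path to read from
        "OnPremConfig": {"AgentArns": [agent_arn]},  # the deployed agent
    }
    destination = {
        "S3BucketArn": bucket_arn,
        "S3Config": {"BucketAccessRoleArn": role_arn},  # role DataSync assumes
    }
    return source, destination
```

A caller would pass each dict to the corresponding `create_location_*` call, then create and start a task between the two returned location ARNs.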
AWS DataSync is available now in the US East, US West, Europe, and Asia Pacific regions. For more information, check out the official AWS DataSync blog post.
Amazon re:Invent 2018: AWS Key Management Service (KMS) Custom Key Store
Amazon rolls out AWS Amplify Console, a deployment and hosting service for mobile web apps, at re:Invent 2018
Day 1 at the Amazon re:Invent conference – AWS RoboMaker, Fully Managed SFTP Service for Amazon S3, and much more!


Google Cloud collaborates with Unity 3D; a connected gaming experience is here!

Savia Lobo
20 Jun 2018
2 min read
Google Cloud announced its alliance with Unity at the Unite Berlin conference this week. Unity is a popular platform for real-time 3D game development and content creation. Google Cloud stated that it is building a suite of managed services and tools for creating connected games, focused on real-time multiplayer experiences. With this, Google Cloud becomes the default cloud provider helping developers build connected games using Unity, and will assist them in easily building and scaling their games. Additionally, developers will get the advantages of Google Cloud right from the Unity development environment, without needing to become cloud experts. Google Cloud and Unity are also collaborating on an open source project for connecting players in multiplayer games. This project aims at creating open source, community-driven solutions built in collaboration with the world’s leading game companies. Unity will also be migrating all of the core infrastructure powering its services and offerings to Google Cloud, running its business on the same cloud on which Unity game developers will develop, test, and globally launch their games. John Riccitiello, Chief Executive Officer, Unity Technologies, said, “Migrating our infrastructure to Google Cloud was a decision based on the company’s impressive global reach and product quality. Now, Unity developers will be able to take advantage of the unparalleled capabilities to support their cloud needs on a global scale.” Google Cloud plans to release new products and features over the coming months. Keep yourself updated on this alliance by checking out Unity’s homepage.
AI for Unity game developers: How to emulate real-world senses in your NPC agent behavior
Google announces Cloud TPUs on the Cloud Machine Learning Engine (ML Engine)
Unity 2D & 3D game kits simplify Unity game development for beginners

The Ceph Foundation has been launched by the Linux Foundation to support the open source storage project

Melisha Dsouza
13 Nov 2018
3 min read
At Ceph Day Berlin yesterday (November 12), the Linux Foundation announced the launch of the Ceph Foundation. A total of 31 organizations have come together to launch the Ceph Foundation, including ARM, Intel, Harvard, and many more. The foundation aims to bring industry members together to support the Ceph open source community.

What is Ceph? Ceph is an open source distributed storage technology that provides storage services for many of the world’s largest container and OpenStack deployments. The range of organizations using Ceph is vast: financial institutions like Bloomberg and Fidelity, cloud service providers like Rackspace and Linode, car manufacturers like BMW, and software firms like SAP and Salesforce.

The main aim of the Ceph Foundation is to raise money via annual membership fees from industry members. The combined pool of funds will then be spent in support of the Ceph community. The team has already raised around half a million dollars for its first year, which will be used to support Ceph project infrastructure, cloud infrastructure services, internships, and community events. The new foundation will provide a forum for community members and industry stakeholders to meet and discuss project status, development and promotional activities, community events, and strategic direction. The Ceph Foundation replaces the Ceph Advisory Board formed back in 2015. According to a Linux Foundation statement, the Ceph Foundation will “organize and distribute financial contributions in a coordinated, vendor-neutral fashion for immediate community benefit”. Ceph has an ambitious plan for new initiatives once the foundation is properly functional.
Some of these include:
- Expansion of and improvements to the hardware lab used to develop and test Ceph
- An events team to help plan various programs and targeted regional or local events
- Investment in strategic integrations with other projects and ecosystems
- Programs around interoperability between Ceph-based products and services
- Internships, training materials, and much more!
The Ceph Foundation will provide an open, collaborative, and neutral home for project stakeholders to coordinate their development and community investments in the Ceph ecosystem. You can head over to their blog to know more about this news.
Facebook’s GraphQL moved to a new GraphQL Foundation, backed by The Linux Foundation
NIPS Foundation decides against name change as poll finds it an unpopular superficial move; instead increases ‘focus on diversity and inclusivity initiatives’
Node.js and JS Foundation announce intent to merge; developers have mixed feelings


Verizon chooses Amazon Web Services(AWS) as its preferred cloud provider

Savia Lobo
18 May 2018
2 min read
Verizon Communications Inc. recently announced that it is migrating about 1,000 of its business-critical applications and database back-end systems to the popular cloud provider Amazon Web Services (AWS). Verizon had bought Terremark, a cloud and service provider, in 2011 as part of its public and private cloud strategy. This strategy included building its own cloud that offered infrastructure-as-a-service to its customers. AWS stayed ahead of the competition by offering added services to its customers, while Verizon could not stay in the race and was overtaken by Microsoft and Google. As a result, in 2016 Verizon closed down its public cloud offering, sold off its cloud and managed hosting service assets to IBM, and sold a number of data centres to Equinix. Verizon first started working with AWS in 2015 and already has many business and consumer applications running in the cloud. The current migration to AWS is part of Verizon’s corporate-wide initiative to increase agility and reduce costs through the use of cloud computing. Some benefits of this migration include:
- AWS will give Verizon access to a more comprehensive set of cloud capabilities, ensuring its developers are able to invent on behalf of its customers.
- Verizon has built AWS-specific training facilities where its employees can quickly get up to speed on AWS technologies and learn how to innovate with speed and at scale.
- AWS enables Verizon to quickly deliver the best, most efficient customer experiences.
- Verizon also aims to make the public cloud a core part of its digital transformation, upgrading its database management approach by replacing its proprietary solutions with Amazon Aurora.
To know more about AWS and Verizon’s partnership, read the AWS blog post.
Linux Foundation launches the Acumos AI Project to make AI accessible
Analyzing CloudTrail Logs using Amazon Elasticsearch
How to create your own AWS CloudTrail


Zefflin Systems unveils ServiceNow Plugin for Red Hat Ansible 2.0

Savia Lobo
29 Jun 2018
2 min read
Zefflin Systems announced its ServiceNow Plugin 2.0 for Red Hat Ansible. The plugin helps IT operations easily map IT services to infrastructure for automatically deployed environments. Zefflin's Plugin Release 2.0 enables the use of the ServiceNow Catalog and Request Management modules to:
- Facilitate deployment options for users
- Capture requests and route them for approval
- Invoke Ansible playbooks to auto-deploy server, storage, and networking
Zefflin's Plugin 2.0 also provides full integration with ServiceNow Change Management for complete ITIL-compliant auditability. Key features and benefits of ServiceNow Plugin 2.0 are:
- Support for AWX: With the help of AWX, customers on the open source version of Ansible can easily integrate with ServiceNow.
- Automated catalog variable creation: Plugin 2.0 reads the target Ansible playbook and automatically creates the input variables in the ServiceNow catalog entry. This significantly reduces implementation time and maintenance effort, which means new playbooks can be onboarded in less time.
- Updates on Ansible job completion: This extends the amount of information returned from an Ansible playbook and logged into the ServiceNow request. It dramatically improves the audit trail and provides a higher degree of process control.
The ServiceNow Plugin for Ansible enables DevOps with ServiceNow integration by establishing:
- Standardized development architectures
- An effective routing approval process
- An ITIL-compliant audit framework
- Faster deployment
- An automated process that frees up the team to focus on other activities
Read more about the ServiceNow Plugin in detail on Zefflin Systems' official blog post.
Mastering Ansible – Protecting Your Secrets with Ansible
An In-depth Look at Ansible Plugins
Installing Red Hat CloudForms on Red Hat OpenStack
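The automated catalog variable creation described above boils down to reading a playbook's declared variables and turning them into catalog input fields. The sketch below illustrates the idea only: the plugin's actual parsing logic is not public, and the playbook here is shown already parsed into Python structures, as a YAML loader would return it:

```python
def catalog_variables(playbook):
    """Collect variable names from each play's `vars` section, in order,
    as candidates for ServiceNow catalog input fields."""
    names = []
    for play in playbook:
        for name in play.get("vars", {}):
            if name not in names:
                names.append(name)
    return names

# A tiny parsed playbook with two declared variables (hypothetical names).
playbook = [{
    "hosts": "all",
    "vars": {"server_size": "m5.large", "disk_gb": 100},
    "tasks": [{"name": "provision", "debug": {"msg": "provisioning..."}}],
}]
```

Each returned name would become one input variable in the generated catalog entry, pre-populated with the playbook's default value.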


Google Compute Engine Plugin makes it easy to use Jenkins on Google Cloud Platform

Savia Lobo
15 May 2018
2 min read
Google recently announced the Google Compute Engine Plugin for Jenkins, which helps provision, configure, and scale Jenkins build environments on Google Cloud Platform (GCP). Jenkins is one of the most popular tools for continuous integration (CI), a standard practice in many software organizations. CI automatically detects changes committed to one's software repositories and runs them through unit tests, integration tests, and functional tests, to finally create an artifact (JAR, Docker image, or binary). Jenkins helps one define a build and test process, then run it continuously against the latest software changes. However, as one scales up a continuous integration practice, one may need to run builds across fleets of machines rather than on a single server. With the Google Compute Engine Plugin, DevOps teams can intuitively manage instance templates and launch build instances that automatically register themselves with Jenkins. The plugin automatically deletes unused instances once work in the build system has slowed down, so that one only pays for the instances needed. One can also configure the Google Compute Engine Plugin to create build instances as Preemptible VMs, which can save up to 80% on the per-second pricing of builds. One can attach accelerators like GPUs and Local SSDs to instances to run builds faster, and configure build instances as one chooses, including the networking. For instance:
- Disable external IPs so that worker VMs are not publicly accessible
- Use Shared VPC networks for greater isolation in one's GCP projects
- Apply custom network tags for improved placement in firewall rules
One can also mitigate security risks in CI using the Compute Engine Plugin, as it uses the latest and most secure version of the Jenkins Java Network Launch Protocol (JNLP) remoting protocol.
One can create an ephemeral build farm in Compute Engine while keeping the Jenkins master and other necessary build dependencies behind a firewall when using Jenkins on-premises. Read more about the Compute Engine Plugin in detail on the Google Cloud blog.
How machine learning as a service is transforming cloud
Polaris GPS: Rubrik’s new SaaS platform for data management applications
Google announces the largest overhaul of their Cloud Speech-to-Text
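The "up to 80%" savings on preemptible build instances mentioned above can be made concrete with a small cost estimate. The on-demand rate below is a placeholder assumption, not a real GCP price:

```python
def preemptible_build_cost(on_demand_per_sec, build_seconds, discount=0.80):
    """Estimate the cost of a build on preemptible VMs, assuming the
    stated up-to-80% discount off per-second on-demand pricing.
    Both the rate and the discount here are illustrative assumptions."""
    return on_demand_per_sec * build_seconds * (1 - discount)
```

For example, an hour-long build at a hypothetical $0.0001/second on-demand rate would cost about $0.36 on demand but roughly $0.072 at the full 80% discount.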

Announcing Cloud Build, Google’s new continuous integration and delivery (CI/CD) platform

Vijin Boricha
27 Jul 2018
2 min read
Thanks to DevOps, no software developer today expects to wait through long release times and development cycles. Cloud platforms, popular for providing flexible infrastructure across different organizations, can now offer better solutions with the help of DevOps. Applications can receive bug fixes and updates almost every day, but such update cycles require a CI/CD framework. Google recently released its new continuous integration/continuous delivery framework, Cloud Build, at Google Cloud Next ’18 in San Francisco. Cloud Build is a complete continuous integration and continuous delivery platform that helps you build software at scale across all languages. It gives developers complete control over a variety of environments such as VMs, serverless, Firebase, or Kubernetes. Cloud Build supports Docker, giving developers the option of automating deployments to Google Kubernetes Engine or Kubernetes for continuous delivery. It also supports the use of triggers for application deployment, which helps launch an update whenever certain conditions are met. Google also tried to eliminate the pain of managing build servers by providing a free tier of Cloud Build with up to 120 build minutes per day and up to 10 concurrent builds. After a user has exhausted the free 120 build minutes, additional build minutes are charged at $0.0034 per minute. Another plus point of Cloud Build is that it automatically identifies package vulnerabilities before deployment, and it allows users to run builds on local machines and later deploy in the cloud. In case of issues, Cloud Build provides detailed insights that ease debugging via build errors and warnings. It also provides an option to filter build results using tags or queries to identify time-consuming tests or slow-performing builds.
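The pricing described above (120 free build minutes per day, then $0.0034 per minute) works out as follows; the function name is ours, the numbers are from the announcement:

```python
def daily_build_cost(build_minutes, free_minutes=120, rate_per_minute=0.0034):
    """Daily Cloud Build cost: the first 120 build minutes per day are
    free, and additional minutes are charged at $0.0034 each."""
    return max(0, build_minutes - free_minutes) * rate_per_minute
```

So a team using 200 build minutes in a day pays for 80 billable minutes, about $0.27, while anything at or under 120 minutes costs nothing.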
Key features of Google Cloud Build:
- Simpler and faster commit-to-deploy time
- Supports language-agnostic builds
- Options to create pipelines to automate deployments
- Flexibility to define custom workflows
- Control build access with Google Cloud security
Check out the Google Cloud Blog if you want to learn more about how to start implementing Google's CI/CD offerings. Related links:
Google Cloud Next: Fei-Fei Li reveals new AI tools for developers
Google’s event-driven serverless platform, Cloud Function, is now generally available
Google Cloud Launches Blockchain Toolkit to help developers build apps easily


Alibaba Cloud partners with SAP to provide a versatile, one-stop cloud computing environment

Savia Lobo
18 Jun 2018
2 min read
For all those who wish to run their SAP solutions on the cloud, Alibaba has granted that wish. At SAPPHIRE NOW 2018, Alibaba showcased SAP products and solutions on its cloud platform. One can now run SAP solutions on Alibaba Cloud with one's choice of operating system. Alibaba Cloud is among the world's top three IaaS providers according to Gartner, and the largest provider of public cloud services in China according to IDC. It provides a comprehensive suite of cloud computing services to businesses all over the world, including merchants with businesses located within Alibaba Group marketplaces, startups, corporations, and government organizations. Using Alibaba Cloud's global infrastructure, enterprises can leverage its robust infrastructure and computing power to achieve greater business value. It has expanded its support for SAP systems by providing:
- Linux support: SAP HANA, SAP MaxDB, and SAP ASE
- Windows support: SAP MaxDB, and SQL Server to run SAP Business Suite and other applications on the SAP Application Server ABAP
Alibaba has also passed the certification to run SAP Business One HANA on its cloud. The partnership of SAP and Alibaba brings a versatile, one-stop cloud computing environment: Alibaba Cloud's reliable, high-performance, and secure infrastructure, interoperating with enterprise-level business application solutions from SAP. With SAP, the Alibaba Cloud platform gains robust global IT infrastructure and computing strengths. It also delivers enhanced ERP services in cloud environments, which in turn aids enterprises in driving their digital transformation. Read more about this on the SAP on Alibaba Cloud official website.


Cloud Native Application Bundle (CNAB): Docker, Microsoft partner on an open source cloud-agnostic all-in-one packaging format

At DockerCon Europe 2018 in Barcelona, Microsoft, in collaboration with the Docker community, announced the Cloud Native Application Bundle (CNAB), an open-source, cloud-agnostic specification for packaging and running distributed applications.

Cloud Native Application Bundle (CNAB)

CNAB is a combined effort of Microsoft and the Docker community to provide a single all-in-one packaging format that unifies the management of multi-service, distributed applications across different toolchains. Docker is the first to implement CNAB for containerized applications, and it plans to expand CNAB across the Docker platform to support new application development, deployment, and lifecycle management.

CNAB allows users to define resources that can be deployed to any combination of runtime environments and tooling, including Docker Engine, Kubernetes, Helm, automation tools, and cloud services.

Patrick Chanezon, a member of technical staff at Docker Inc., writes, "Initially CNAB support will be released as part of our docker-app experimental tool for building, packaging and managing cloud-native applications. Docker lets you package CNAB bundles as Docker images, so you can distribute and share through Docker registry tools including Docker Hub and Docker Trusted Registry." Docker also plans to enable organizations to deploy and manage CNAB-based applications in Docker Enterprise soon.
Scott Johnston, chief product officer at Docker, said, "This is not a Docker proprietary thing, this is not a Microsoft proprietary thing; it can take Compose files as inputs, it can take Helm charts as inputs, it can take Kubernetes YAML as inputs, it can take serverless artifacts as inputs."

According to Microsoft, it partnered with Docker to solve problems that ISVs (Independent Software Vendors) and enterprises face, including:

Describing an application as a single artifact, even when it is composed of a variety of cloud technologies
Provisioning applications without having to master dozens of tools
Managing the lifecycle (particularly installation, upgrade, and deletion) of applications

Features that CNAB brings include:

Manage discrete resources as a single logical unit that comprises an app
Use and define operational verbs for lifecycle management of an app
Sign and digitally verify a bundle, even when the underlying technology doesn't natively support it
Attest and digitally verify that the bundle has achieved a given state, to control how the bundle can be used
Export a bundle and all its dependencies to reliably reproduce it in another environment, including offline environments (IoT edge, air-gapped environments)
Store bundles in repositories for remote installation

According to a user review on a Hacker News thread, "The goal with CNAB is to be able to version your application with all of its components and then ship that as one logical unit making it reproducible. The package format is flexible enough to let you use the tooling that you're already using." Another user said, "CNAB makes reproducibility possible by providing unified lifecycle management, packaging, and distribution. Of course, if bundle authors don't take care to work around problems with imperative logic, that's a risk."

To know more about the Cloud Native Application Bundle (CNAB) in detail, visit the Microsoft blog.
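For a sense of what the format looks like, a CNAB bundle is described by a bundle.json manifest that points at an invocation image containing the installation logic. The sketch below is illustrative only — the field names follow the early CNAB draft specification, and the bundle name, image reference, and maintainer are hypothetical:

```json
{
  "schemaVersion": "v1.0.0-WD",
  "name": "helloworld",
  "version": "0.1.0",
  "description": "An illustrative CNAB bundle (hypothetical example)",
  "invocationImages": [
    {
      "imageType": "docker",
      "image": "example/helloworld-cnab:0.1.0"
    }
  ],
  "maintainers": [
    { "name": "Jane Example", "email": "jane@example.com" }
  ]
}
```

The invocation image is what lets one bundle span multiple toolchains: it carries a run tool that can drive whatever Compose files, Helm charts, or Kubernetes YAML the bundle contains.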
Savia Lobo
21 Nov 2018
3 min read

Autodesk acquires PlanGrid for $875 million, to digitize and automate construction workflows

Yesterday, Autodesk, a software corporation serving the architecture, engineering, construction, and manufacturing industries, announced that it has acquired PlanGrid, a leading provider of construction productivity software, for $875 million net of cash. The transaction is expected to close during Autodesk's fourth quarter of fiscal 2019, ending January 31, 2019.

With this acquisition of the San Francisco-based startup, Autodesk will be able to offer a more comprehensive, cloud-based construction platform. PlanGrid's software, launched in 2011, gives builders real-time access to project plans, punch lists, project tasks, progress photos, daily field reports, submittals, and more.

Autodesk's CEO, Andrew Anagnost, said, "There is a huge opportunity to streamline all aspects of construction through digitization and automation. The acquisition of PlanGrid will accelerate our efforts to improve construction workflows for every stakeholder in the construction process."

According to TechCrunch, "The company, which is a 2012 graduate of Y Combinator, raised just $69 million, so this appears to be a healthy exit for them." In a 2015 interview at TechCrunch Disrupt in San Francisco, CEO and co-founder Tracy Young said the industry was ripe for change: "The heart of construction is just a lot of construction blueprints information. It's all tracked on paper right now and they're constantly, constantly changing." When Young started the company in 2011, her idea was to move all that paper to the cloud and display it on an iPad.

According to Young, "At PlanGrid, we have a relentless focus on empowering construction workers to build as productively as possible. One of the first steps to improving construction productivity is the adoption of digital workflows with centralized data. PlanGrid has excelled at building beautiful, simple field collaboration software, while Autodesk has focused on connecting design to construction. 
Together, we can drive greater productivity and predictability on the job site."

Jim Lynch, construction general manager at Autodesk, said, "We'll integrate workflows between PlanGrid's software and both Autodesk Revit software and the Autodesk BIM 360 construction management platform, for a seamless exchange of information between all project members."

Autodesk and PlanGrid have developed complementary construction integration ecosystems that customers can use to connect other software applications. The acquisition is expected to expand this integration partner ecosystem, giving customers a customizable platform to test and scale new ways of working.

To know more about this news in detail, visit Autodesk's official press release.
Prasad Ramesh
04 Dec 2018
3 min read

Kubernetes 1.13 released with new features and fixes to a major security flaw

A privilege escalation flaw in Kubernetes was discussed on GitHub last week, and Red Hat has since released patches for it. Yesterday, Kubernetes 1.13 was also released.

The security flaw

A recent GitHub issue outlines the problem. Designated CVE-2018-1002105, the flaw allowed unauthorized users to craft special requests and establish a connection through the Kubernetes API server to a backend server, then send arbitrary requests over that same connection directly to the backend.

Red Hat released patches for this vulnerability yesterday. All Kubernetes-based products are affected. The flaw has now been patched, and since Red Hat classifies its impact as critical, a version upgrade is strongly recommended if you're running an affected product. You can find more details on the Red Hat website.

Let's now look at the new features in Kubernetes 1.13 beyond the security patch.

kubeadm is GA in Kubernetes 1.13

kubeadm is an essential tool for managing the lifecycle of a cluster, from creation through configuration to upgrade, and it is now officially GA. The tool handles bootstrapping of production clusters on current hardware and configuration of core Kubernetes components. With the GA release, advanced features are available around pluggability and configurability. kubeadm aims to be a toolbox for both admins and automated, higher-level systems.

Container Storage Interface (CSI) is also GA

The Container Storage Interface (CSI), introduced as alpha in Kubernetes 1.9 and beta in Kubernetes 1.10, is now generally available. CSI makes the Kubernetes volume layer truly extensible, allowing third-party storage providers to write plugins that interoperate with Kubernetes without having to modify the core code.

CoreDNS replaces kube-dns as the default DNS server

CoreDNS replaces kube-dns as the default DNS server for Kubernetes. 
CoreDNS is a general-purpose, authoritative DNS server that provides an extensible, backwards-compatible integration with Kubernetes. It runs as a single executable and a single process, supports flexible use cases through custom DNS entries, and is written in Go, making it memory-safe. kube-dns will be supported for at least one more release.

Beyond these, there are other feature updates, such as support for third-party monitoring and more features graduating to stable and beta. For more details on the release, visit the Kubernetes website.
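CoreDNS is driven by a single Corefile of chained plugins, which is where its Kubernetes integration lives. The sketch below approximates the default Corefile that kubeadm generates around this release; treat the exact plugin list as an assumption, since defaults vary by version:

```
.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    prometheus :9153
    proxy . /etc/resolv.conf
    cache 30
    loop
    reload
    loadbalance
}
```

The kubernetes plugin answers queries for cluster records under cluster.local (plus reverse zones), while proxy forwards everything else to the node's upstream resolvers — custom DNS entries are added by slotting further plugins into this chain.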