
Tech News - Cloud Computing

175 Articles

Kata Containers 1.5 released with Firecracker support, integration improvements and IBM Z series support

Melisha Dsouza
24 Jan 2019
3 min read
Yesterday, Kata Containers 1.5 was released with a host of updates, including preliminary support for the Firecracker hypervisor, s390x architecture support, and significant integration improvements. Kata Containers is an open source project and community building a standard implementation of lightweight virtual machines (VMs) that perform like containers while providing the workload isolation and security advantages of VMs. The project is managed by the OpenStack Foundation and combines technology from Intel Clear Containers and Hyper runV.

Features of Kata Containers 1.5

#1 Firecracker support

Eric Ernst, an architecture committee member for the Kata Containers project, states that Kata Containers was designed "to support multiple hypervisor solutions." The new Firecracker support introduced in this update aims to do just that. At the Amazon re:Invent 2018 conference, the AWS team released Firecracker, which it described as a new virtualization technology and open source project for running multi-tenant container workloads. Firecracker enables service owners to operate secure multi-tenant container-based services, combining the speed, resource efficiency, and performance of containers with the security and isolation of traditional VMs.

In Kata Containers 1.5, Firecracker can be used for feature-constrained workloads, while QEMU remains available for more advanced workloads. The blog also mentions a small limitation of the Kubernetes functionality when using Kata with Firecracker: because memory and CPU definitions cannot be dynamically adjusted for a pod, and because Firecracker supports only block-based storage drivers and volumes, the devicemapper storage driver is required. This is available in Kubernetes + CRI-O and Docker version 18.06. Users can expect more storage driver options soon. Check out this screencast for an example of Kata configured in CRI-O + Kubernetes, utilizing both QEMU and Firecracker.
You can head over to GitHub to understand how to get started quickly with Kata + runtimeClass in Kubernetes.

#2 s390x architecture support

Kata Containers 1.5 adds IBM Z series support. According to CIO, the IBM Z platform includes notable security features: a proprietary ASIC with on-chip hardware dedicated specifically to cryptographic processes, enabling all-encompassing encryption. This keeps data encrypted at all times except while it is being processed; data is decrypted only during computations and encrypted again afterwards.

#3 containerd integration

The 1.5 release simplifies how Kata Containers integrates with containerd. Following last year's discussion about adding a shim API to containerd, the 1.5 release includes an initial implementation meeting this shim API. Eric Ernst says the API will result in a better interface to Kata Containers and provide the ability to directly access container-level statistics from the Kata runtime. The Kata team plans to have several presentations on this topic at the Open Infrastructure Summit in Denver, April 29-May 1, 2019.

You can head over to Eric's blog for more insights on this announcement, or to the AWS blog to know more about the Firecracker support in Kata 1.5.

CNCF releases 9 security best practices for Kubernetes, to protect a customer's infrastructure
Tumblr open sources its Kubernetes tools for better workflow integration
Implementing Azure-Managed Kubernetes and Azure Container Service [Tutorial]
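The Kata + runtimeClass wiring mentioned above amounts to registering a runtime handler with the cluster and referencing it from a pod spec. A minimal sketch of those manifests as plain Python dicts (the handler name `kata-fc` and the pod details are assumptions for illustration; the actual handler name depends on how the CRI runtime is configured):

```python
# Sketch of the objects a Kata + runtimeClass setup might register.
# "kata-fc" is a hypothetical handler name, not taken from the announcement.

def runtime_class(name, handler):
    """Build a node.k8s.io RuntimeClass manifest as a plain dict."""
    return {
        "apiVersion": "node.k8s.io/v1beta1",
        "kind": "RuntimeClass",
        "metadata": {"name": name},
        "handler": handler,  # must match a runtime configured in the CRI
    }

def pod_with_runtime(pod_name, image, runtime_class_name):
    """Build a Pod manifest that asks the kubelet for a specific runtime."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": pod_name},
        "spec": {
            "runtimeClassName": runtime_class_name,
            "containers": [{"name": pod_name, "image": image}],
        },
    }

kata_fc = runtime_class("kata-fc", "kata-fc")
pod = pod_with_runtime("nginx-untrusted", "nginx:1.15", "kata-fc")
print(pod["spec"]["runtimeClassName"])  # -> kata-fc
```

In a real cluster these dicts would be applied with `kubectl` or a Kubernetes client; here they only illustrate the shape of the contract between the pod and the runtime.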


Amazon re:Invent Day 3: Lambda Layers, Lambda Runtime API and other exciting announcements!

Melisha Dsouza
30 Nov 2018
4 min read
The second-to-last day of Amazon re:Invent 2018 ended on a high note. AWS announced two new features, Lambda Layers and the Lambda Runtime API, that claim to "make serverless development even easier". In addition, AWS announced that Application Load Balancers can now invoke Lambda functions to serve HTTP(S) requests, and that Ruby is now a supported language for Lambda.

#1 Lambda Layers

Lambda Layers allow developers to centrally manage code and data shared across multiple functions. Instead of packaging and deploying this shared code together with every function that uses it, developers can put common components in a ZIP file and upload it as a Lambda Layer. These layers can be used within an AWS account, shared between accounts, or shared publicly with the developer community.

AWS is also publishing a public layer which includes NumPy and SciPy. This layer is prebuilt and optimized to help users carry out data processing and machine learning applications quickly. Developers can include additional files or data for their functions, including binaries such as FFmpeg or ImageMagick, or dependencies such as NumPy for Python. These layers are added to your function's ZIP file when published.

Layers can also be versioned to manage updates, with each version being immutable. When a version is deleted or its permissions are revoked, a developer won't be able to create new functions with it; however, functions that used it previously will continue to work. Lambda Layers help make function code smaller and more focused on what the application has to build, and in addition to faster deployments (because less code must be packaged and uploaded), code dependencies can be reused.

#2 Lambda Runtime API

This is a simple interface to use any programming language, or a specific language version, for developing functions. Runtimes can be shared as layers, which allows developers to work with the programming language of their choice when authoring Lambda functions. Developers using the Runtime API have to bundle it with their application artifact or as a Lambda layer that the application uses.

When creating or updating a function, users can select a custom runtime. The function must include (in its code or in a layer) an executable file called bootstrap, which is responsible for the communication between the code and the Lambda environment. As of now, AWS has made open source runtimes available for C++ and Rust. Other open source runtimes that may become available soon include:

• Erlang (Alert Logic)
• Elixir (Alert Logic)
• Cobol (Blu Age)
• Node.js (NodeSource N|Solid)
• PHP (Stackery)

The Runtime API defines how AWS will support new languages in Lambda. A notable feature of the C++ runtime is that it combines the simplicity and expressiveness of interpreted languages with good performance and a low memory footprint. The Rust runtime makes it easy to write highly performant Lambda functions in Rust.

#3 Application Load Balancers can invoke Lambda functions to serve HTTP(S) requests

This new functionality enables users to access serverless applications from any HTTP client, including web browsers. Users can also route requests to different Lambda functions based on the requested content. An Application Load Balancer can be used as a common HTTP endpoint to simplify operations and monitoring for applications that combine servers and serverless computing.

#4 Ruby is now a supported language for AWS Lambda

Developers can write Lambda functions as idiomatic Ruby code and run them on AWS. The AWS SDK for Ruby is included in the Lambda execution environment by default, making it easy and quick for functions to interact directly with AWS resources. Ruby on Lambda can be used either through the AWS Management Console or the AWS SAM CLI, letting developers benefit from the reduced operational overhead, scalability, availability, and pay-per-use pricing of Lambda.
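A function behind an Application Load Balancer receives the HTTP request in its event and must return an HTTP-shaped response. A minimal Python handler sketch (the response field names follow the documented ALB target-group format; the greeting logic and event contents are invented for illustration):

```python
import json

def handler(event, context):
    """Lambda handler invoked by an Application Load Balancer.

    ALB passes the HTTP request in `event` (method, path, headers,
    queryStringParameters, body) and expects a dict with statusCode,
    statusDescription, headers, and body in return.
    """
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "statusDescription": "200 OK",
        "isBase64Encoded": False,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Local smoke test with a minimal ALB-style event
resp = handler({"queryStringParameters": {"name": "re:Invent"}}, None)
print(resp["body"])  # -> {"message": "hello, re:Invent"}
```

Deployed behind an ALB target group of type "lambda", any HTTP client hitting the load balancer would receive this JSON body directly.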
Head over to What's new with AWS to stay updated on upcoming AWS announcements.

Day 1 at the Amazon re:Invent conference – AWS RoboMaker, Fully Managed SFTP Service for Amazon S3, and much more!
Amazon introduces Firecracker: Lightweight Virtualization for Running Multi-Tenant Container Workloads
AWS introduces 'AWS DataSync' for automated, simplified, and accelerated data transfer


GitLab is moving from Azure to Google Cloud in July

Richard Gall
26 Jun 2018
2 min read
In a switch that contains just a subtle hint of saltiness, GitLab has announced that it will move its code repositories from Microsoft Azure to Google Cloud on Saturday, July 28, 2018. The news comes just weeks after Microsoft revealed it was to acquire GitHub (this happened in early June, if you've lost track of time). While it's tempting to see this as a retaliatory step, it is instead just a coincidence: the migration was planned before the Microsoft and GitHub news was even a rumor.

Why is GitLab moving to Google Cloud?

According to GitLab's Andrew Newdigate, the migration to Google Cloud is being done in a bid to "improve performance and reliability." In a post on the GitLab blog, Newdigate explains that one of the key drivers of the team's decision is Kubernetes: "We believe Kubernetes is the future. It's a technology that makes reliability at massive scale possible." Kubernetes originated at Google, so it makes sense for GitLab to switch to Google's cloud offering to align its toolchain.

Read next: The Microsoft-GitHub deal has set into motion an exodus of GitHub projects to GitLab

How GitLab's migration will happen

A central part of the GitLab migration is Geo, a tool built by GitLab that makes cloning and reproducing repositories easier for developers working in different locations. Essentially, it creates 'mirrors' of GitLab instances. That's useful for developers using GitLab, as it provides extra safety and security, but GitLab is also using it itself for the migration.

[Image via GitLab]

Newdigate writes that GitLab has been running a parallel site on Google Cloud Platform as the migration unfolds. This contains an impressive "200TB of Git data and 2TB of relational data in PostgreSQL."

Rehearsing the failover in production

Coordination and planning are everything when conducting such a substantial migration.
That's why GitLab's Geo, Production, and Quality teams meet several times a week to rehearse the failover. This process has a number of steps, and each run-through throws up new issues and problems, which are then documented and resolved by the relevant team. Given that confidence and reliability are essential to any version control system, building this into the migration process is a worthwhile activity.


Amazon re:Invent announces Amazon DynamoDB Transactions, CloudWatch Logs Insights and cloud security conference, Amazon re:Inforce 2019

Melisha Dsouza
28 Nov 2018
4 min read
Day 2 of the Amazon AWS re:Invent 2018 conference kicked off with just as much enthusiasm as day 1. With more announcements and releases scheduled for the day, the conference is proving to be a real treat for AWS developers. Alongside announcements like Amazon Comprehend Medical and new container products in the AWS Marketplace, Amazon announced Amazon DynamoDB Transactions and Amazon CloudWatch Logs Insights. We will also take a look at Amazon re:Inforce 2019, a new conference dedicated to cloud security.

Amazon DynamoDB Transactions

Customers have used Amazon DynamoDB for many use cases, from building microservices and mobile backends to implementing gaming and Internet of Things (IoT) solutions. Amazon DynamoDB is a non-relational database delivering reliable performance at any scale. It offers built-in security, backup and restore, and in-memory caching, and is a fully managed, multi-region, multi-master database providing consistent single-digit-millisecond latency.

With native support for transactions, DynamoDB now helps developers easily implement business logic that requires multiple, all-or-nothing operations across one or more tables. DynamoDB transactions give users the atomicity, consistency, isolation, and durability (ACID) properties across one or more tables within a single AWS account and region. DynamoDB is the only non-relational database that supports transactions across multiple partitions and tables. Two new DynamoDB operations have been introduced for handling transactions:

• TransactWriteItems, a batch operation that contains a write set, with one or more PutItem, UpdateItem, and DeleteItem operations. It can optionally check for prerequisite conditions that need to be satisfied before making updates.
• TransactGetItems, a batch operation that contains a read set, with one or more GetItem operations. If this request is issued on an item that is part of an active write transaction, the read transaction is canceled.

Amazon CloudWatch Logs Insights

Many AWS services create logs. The data points, patterns, trends, and insights embedded within these logs can be used to understand how applications and AWS resources are behaving, identify room for improvement, and address operational issues. However, raw logs are huge, which makes analysis difficult; considering that individual AWS customers routinely generate 100 terabytes or more of log files each day, these operations become complex and time-consuming.

Enter CloudWatch Logs Insights, designed to work at cloud scale with no setup or maintenance required. It churns through massive logs in seconds and provides fast, interactive queries and visualizations. CloudWatch Logs Insights includes a sophisticated ad-hoc query language with commands to perform complicated operations efficiently. It is a fully managed service that can handle any log format and auto-discovers fields from JSON logs. What's more, users can visualize query results using line and stacked-area charts, and add queries to a CloudWatch dashboard.

AWS re:Inforce 2019

In addition to these releases, Amazon announced that AWS is launching its first conference dedicated to cloud security, called AWS re:Inforce. The inaugural AWS re:Inforce, a hands-on gathering of like-minded security professionals, will take place in Boston, MA on June 25th and 26th, 2019 at the Boston Exhibit and Conference Center. Here is what the AWS re:Inforce 2019 conference is expected to cover:

• A deep dive into the latest approaches to security best practices and risk management utilizing AWS services, features, and tools.
• Direct access for customers to the latest security research and trends from subject matter experts, along with the opportunity to participate in hands-on exercises with AWS services.

There are multiple learning tracks to be covered over this two-day conference, including a technical track and a business enablement track, designed to meet the needs of security and compliance professionals, from C-suite executives to security engineers, developers, and risk and compliance officers. The conference will also feature sessions on Identity & Access Management, Infrastructure Security, Detective Controls, Governance, Risk & Compliance, Data Protection & Privacy, Configuration & Vulnerability Management, and much more.

Head over to What's new with AWS to stay updated on upcoming AWS announcements.

Day 1 at the Amazon re:Invent conference – AWS RoboMaker, Fully Managed SFTP Service for Amazon S3, and much more!
Amazon introduces Firecracker: Lightweight Virtualization for Running Multi-Tenant Container Workloads
AWS introduces 'AWS DataSync' for automated, simplified, and accelerated data transfer
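The two DynamoDB transaction operations described above take a list of per-table actions that succeed or fail as a unit. A hedged boto3 sketch (the table names, keys, and condition expressions are invented for illustration; the request shape follows the TransactWriteItems API):

```python
# Sketch of an all-or-nothing write using the new TransactWriteItems
# operation. Table and attribute names here are hypothetical.

def build_order_transaction(order_id, customer_id):
    """Build a TransactItems list: verify the customer exists, then
    create the order, atomically."""
    return [
        {"ConditionCheck": {
            "TableName": "Customers",
            "Key": {"CustomerId": {"S": customer_id}},
            "ConditionExpression": "attribute_exists(CustomerId)",
        }},
        {"Put": {
            "TableName": "Orders",
            "Item": {"OrderId": {"S": order_id},
                     "CustomerId": {"S": customer_id}},
            "ConditionExpression": "attribute_not_exists(OrderId)",
        }},
    ]

items = build_order_transaction("order-123", "cust-42")

# The actual call requires AWS credentials and existing tables:
# import boto3
# boto3.client("dynamodb").transact_write_items(TransactItems=items)
```

If any condition fails (for example, the order ID already exists), the whole transaction is rejected and neither table is modified.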


Google announces the Beta version of Cloud Source Repositories

Melisha Dsouza
21 Sep 2018
3 min read
Yesterday, Google launched the beta version of its Cloud Source Repositories. Claiming to provide users with a better search experience, Google Cloud Source Repositories is a Git-based source code repository built on Google Cloud. It introduces a powerful code search feature, which uses document indexing and retrieval methods similar to Google Search.

Cloud Source Repositories could mark a major comeback for Google after Google Code began shutting down in 2015. It could also be a very strategic move, as many coders have been looking for an alternative to GitHub after its acquisition by Microsoft.

How does Google code search work?

Code search in Cloud Source Repositories optimizes the indexing, algorithms, and result types for searching code. On submitting a query, the query is sent to a root machine and sharded to hundreds of secondary machines. The machines look for matches by file names, classes, functions, and other symbols, and match the context and namespace of the symbols. A single query can search across thousands of different repositories.

Cloud Source Repositories also has a semantic understanding of the code. For Java, JavaScript, Go, C++, Python, TypeScript, and Proto files, the tool will also return information on whether the match is a class, method, enum, or field.

Solutions to common code search challenges

#1 Executing searches across all the code at one's company

If a company has repositories storing different versions of the code, executing searches across all of it is exhausting and time-consuming. With Cloud Source Repositories, the default branches of all repositories are always indexed and up to date, so searching across all the code is faster.

#2 Searching for code that performs a common operation

Cloud Source Repositories enables users to perform quick searches. Users can also save time by discovering and reusing an existing solution, while avoiding bugs in their own code.
#3 When a developer cannot remember the right way to use a common code component

Developers can enter a query and search across all of their company's code for examples of how the common piece of code has been used successfully by other developers.

#4 Issues with a production application

If a developer encounters a specific error message in the server logs that reads 'User ID 123 not found in PaymentDatabase', they can perform a regular expression search for 'User ID .* not found in PaymentDatabase' and instantly find the location in the code where this error was triggered.

All repositories that are either mirrored or added to Cloud Source Repositories can be searched in a single query. Cloud Source Repositories has a limited free tier that supports projects up to 50GB with a maximum of 5 users. You can read more about Cloud Source Repositories in the official documentation.

Google announces Flutter Release Preview 2 with extended support for Cupertino themed controls and more!
Google to allegedly launch a new Smart home device
Google's prototype Chinese search engine 'Dragonfly' reportedly links searches to phone numbers
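The regular-expression search in the last example behaves like an ordinary regex match applied across indexed lines. A quick sketch of the pattern from the article against some invented log lines:

```python
import re

# The pattern from the example above: match the error for any user ID.
pattern = re.compile(r"User ID .* not found in PaymentDatabase")

log_lines = [                                     # hypothetical log lines
    "User ID 123 not found in PaymentDatabase",
    "User ID 456 not found in PaymentDatabase",
    "Payment for user 123 completed",
]

matches = [line for line in log_lines if pattern.search(line)]
print(matches)  # the two "not found" lines
```

In Cloud Source Repositories the same pattern would be run across every indexed repository rather than a local list, but the matching semantics are the familiar regex ones.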


AWS SAM (AWS Serverless Application Model) is now open source!

Savia Lobo
24 Apr 2018
2 min read
AWS recently announced that SAM (Serverless Application Model) is now open source. With AWS SAM, one can define serverless applications in a simple and clean syntax. The AWS Serverless Application Model extends AWS CloudFormation and provides a simplified way of defining the Amazon API Gateway APIs, AWS Lambda functions, and Amazon DynamoDB tables needed by your serverless application.

AWS SAM comprises:

• the SAM specification
• code translating SAM templates into AWS CloudFormation stacks
• general information about the model
• examples of common applications

The SAM specification and implementation are open sourced under the Apache 2.0 license for AWS partners and customers to adopt and extend within their own toolsets. The current version of the SAM specification is available at AWS SAM 2016-10-31.

Basic steps to create a serverless application with AWS SAM

Step 1: Create a SAM template, a JSON or YAML configuration file that describes the Lambda functions, API endpoints, and other resources in your application.

Step 2: Test, upload, and deploy the application using the SAM Local CLI. During deployment, SAM automatically translates the application's specification into CloudFormation syntax, filling in default values for any unspecified properties and determining the appropriate mappings and invocation permissions to set up for any Lambda functions.

To learn more about how to define and deploy serverless applications, read the How-To Guide and see the examples. One can build serverless applications faster, and further simplify development, by defining new event sources, new resource types, and new parameters within SAM. One can also modify SAM to integrate it with other frameworks and deployment providers from the community.

For more in-depth knowledge, read the AWS SAM development guide on GitHub.
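The SAM template from Step 1 is just a CloudFormation document with a `Transform` marker and serverless resource types. A hedged sketch built as a plain dict and emitted as JSON (SAM templates may be JSON or YAML; the function name, handler, and code path are hypothetical):

```python
import json

# Minimal SAM template sketch. The Transform line is what tells
# CloudFormation to expand the serverless resource types.
def sam_template():
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Transform": "AWS::Serverless-2016-10-31",
        "Resources": {
            "HelloFunction": {                       # hypothetical function
                "Type": "AWS::Serverless::Function",
                "Properties": {
                    "Handler": "app.handler",
                    "Runtime": "python3.6",
                    "CodeUri": "./src",
                    "Events": {
                        "HelloApi": {
                            "Type": "Api",
                            "Properties": {"Path": "/hello", "Method": "get"},
                        }
                    },
                },
            }
        },
    }

template = sam_template()
print(json.dumps(template, indent=2))
```

During deployment, SAM would expand this single `AWS::Serverless::Function` resource into the underlying Lambda function, API Gateway API, and IAM permissions.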

Amazon announces Corretto, an open source, production-ready distribution of OpenJDK backed by AWS

Melisha Dsouza
15 Nov 2018
3 min read
Yesterday at Devoxx Belgium, Amazon announced the preview of Amazon Corretto, a free distribution of OpenJDK that offers long-term support. With Corretto, users can develop and run Java applications on popular operating systems. The team further mentioned that Corretto is multiplatform and production-ready, with long-term support that will include performance enhancements and security fixes. Amazon also plans to make Corretto the default OpenJDK on Amazon Linux 2 in 2019. The preview currently supports Amazon Linux, Windows, macOS, and Docker, with additional support planned for general availability.

Corretto is run internally by Amazon on thousands of production services and is certified as compatible with the Java SE standard.

Features and benefits of Corretto

1. Amazon Corretto lets a user run the same environment in the cloud, on premises, and on their local machine. During the preview, Corretto allows users to develop and run Java applications on popular operating systems like Amazon Linux 2, Windows, and macOS.
2. Users can upgrade versions only when they feel the need to do so.
3. Since it is certified to meet the Java SE standard, Corretto can be used as a drop-in replacement for many Java SE distributions.
4. Corretto is available free of cost, and there are no additional paid features or restrictions.
5. Corretto is backed by Amazon, and its patches and improvements enable Amazon to address high-scale, real-world service concerns. Corretto can meet heavy performance and scalability demands.
6. Customers will obtain long-term support, with quarterly updates including bug fixes and security patches. AWS will provide urgent fixes to customers outside of the quarterly schedule.

On Hacker News, users are discussing how the product's documentation could be better formulated. Some users feel that "Amazon's JVM is quite complex". Users are also talking about Oracle offering the same service at a price.
One user has pointed out the differences between Oracle's service and Amazon's; the most notable feature of this release apparently is the long-term support offered by Amazon.

Head over to Amazon's blog to read more about this release. You can also find the source code for Corretto on GitHub.

Amazon tried to sell its facial recognition technology to ICE in June, emails reveal
Amazon addresses employees' dissent regarding the company's law enforcement policies at an all-staff meeting, in a first
Amazon splits HQ2 between New York and Washington, D.C. after making 200+ cities compete over a year; public sentiment largely negative


CNCF accepts Cloud Native Buildpacks to the Cloud Native Sandbox

Sugandha Lahoti
04 Oct 2018
2 min read
Yesterday, the Cloud Native Computing Foundation (CNCF) accepted Cloud Native Buildpacks (CNB) into the CNCF Sandbox. With this collaboration, Buildpacks will be able to leverage the vendor neutrality of the CNCF and its cloud native ecosystem.

The Cloud Native Buildpacks project was initiated by Pivotal and Heroku in January 2018. The project aims to unify the buildpack ecosystems with a platform-to-buildpack contract, incorporating learnings from maintaining production-grade buildpacks at both Pivotal and Heroku.

What are Cloud Native Buildpacks?

At a high level, Cloud Native Buildpacks turn source code into production-ready, OCI-compatible Docker images. This gives users more options to customize the runtime while keeping their apps portable. Buildpacks minimize initial time to production, reducing the operational burden on developers, and support enterprise operators who manage apps at scale.

Buildpacks were first created by Heroku in 2011. Since then, they have been adopted by Cloud Foundry as well as GitLab, Knative, Microsoft, Dokku, and Drie. The Buildpack API was open sourced in 2012 with Heroku-specific elements removed. However, each vendor that adopted buildpacks evolved the API independently, which led to isolated ecosystems. As part of the Cloud Native Sandbox project, the Buildpack API is being standardized for all platforms. The maintainers are also opening up their tooling, and will run buildpacks under the Buildpacks GitHub organization.

"Anyone can create a buildpack for any Linux-based technology and share it with the world. Buildpacks' ease of use and flexibility are why millions of developers rely on them for their mission critical apps," said Joe Kutner, architect at Heroku.
"Cloud Native Buildpacks will bring these attributes in line with modern container standards, allowing developers to focus on their apps instead of their infrastructure."

Developers can start using Cloud Native Buildpacks by forking one of the buildpack samples. You can also read up on the implementation specifics laid out in the Buildpack API documentation.

CNCF Sandbox, the home for evolving cloud native projects, accepts Google's OpenMetrics Project
Google Cloud hands over Kubernetes project operations to CNCF, grants $9M in GCP credits
Cortex, an open source, horizontally scalable, multi-tenant Prometheus-as-a-service becomes a CNCF Sandbox project
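The platform-to-buildpack contract mentioned above starts with a detect phase: the buildpack inspects the app source and declares whether it applies. A toy sketch of that idea in Python (real buildpacks implement this as a `bin/detect` executable per the Buildpack API; the file-marker heuristic here is purely illustrative):

```python
import os
import tempfile

def detect_python_app(app_dir):
    """Toy version of a buildpack's detect phase: claim the app if it
    looks like a Python project. Marker files are an illustrative
    heuristic, not the real Buildpack API."""
    markers = ("requirements.txt", "setup.py", "Pipfile")
    return any(os.path.exists(os.path.join(app_dir, m)) for m in markers)

# Demo against a temporary "app" directory
with tempfile.TemporaryDirectory() as app:
    before = detect_python_app(app)                       # empty dir
    open(os.path.join(app, "requirements.txt"), "w").close()
    after = detect_python_app(app)                        # marker present

print(before, after)  # -> False True
```

When detect succeeds, a build phase would then assemble the app's layers into an OCI image; when it fails, the platform moves on to the next buildpack in its ordered list.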


RedHat's OperatorHub.io makes it easier for Kubernetes developers and admins to find pre-tested 'Operators' for applications

Melisha Dsouza
01 Mar 2019
2 min read
Last week, Red Hat launched OperatorHub.io in collaboration with Microsoft, Google Cloud, and Amazon Web Services, as a "public registry" for finding services backed by Kubernetes Operators. According to the Red Hat blog, the Operator pattern automates infrastructure and application management tasks using Kubernetes as the automation engine. Developers have shown a growing interest in Operators owing to benefits like access to the automation advantages of the public cloud and the portability of services across Kubernetes environments.

Red Hat also notes that while the number of available Operators has increased, it is challenging for developers and Kubernetes administrators to find Operators that meet their quality standards. OperatorHub.io was created to solve this challenge.

Features of OperatorHub.io

OperatorHub.io is a common registry to "publish and find available Operators". It is a curation of Operator-backed services with a base level of documentation, active communities or vendor backing to show maintenance commitments, basic testing, and packaging for optimized lifecycle management on Kubernetes. The platform will enable the creation of more Operators as well as improvements to existing ones, and acts as a centralized repository that helps users and the community organize around Operators.

Operators can be listed on OperatorHub.io only when they show cluster lifecycle features and packaging that can be maintained through the Operator Framework's Operator Lifecycle Management, along with acceptable documentation for their intended users. Operators currently listed on OperatorHub.io include the Amazon Web Services Operator, Couchbase Autonomous Operator, CrunchyData's PostgreSQL, MongoDB Enterprise Operator, and many more.

This news has been received by the Kubernetes community with much enthusiasm.
This is not the first time that Red Hat has tried to build on the momentum behind Kubernetes Operators. According to TheNewStack, the company acquired CoreOS last year and went on to release the Operator Framework, an open source toolkit that "provides an SDK, lifecycle management, metering, and monitoring capabilities to support Operators".

Red Hat announces CodeReady Workspaces, the first Kubernetes-native IDE for easy collaboration among developers
RedHat shares what to expect from next week's first-ever DNSSEC root key rollover


Google’s second innings in China: Exploring cloud partnerships with Tencent and others

Bhagyashree R
07 Aug 2018
3 min read
Google, with the aim of re-entering the Chinese market, is in talks with top companies in China like Tencent Holdings Ltd. (the company that owns the popular social media app WeChat) and Inspur Group. Its aim is to expand its cloud services into the world's second-largest economy. According to some people familiar with the ongoing discussions, the talks began in early 2018, and Google was able to narrow the field to three firms in late March. But because of the US-China trade war, it is uncertain whether this will materialize.

Why is Google interested in cloud partnerships with Chinese tech giants?

In many countries, Google rents computing power and storage over the internet and sells G Suite, which includes Gmail, Docs, Drive, Calendar, and more tools for business, all running on its own data centers. Because China requires digital information to be stored in the country, Google wants to collaborate with domestic data center and server providers to run its internet-based services there; this is why it needs to partner with local players.

A tie-up with large Chinese tech firms like Tencent and Inspur would also give Google powerful allies as it attempts a second innings in China after its exit from the country in 2010. A cloud partnership in China would help Google compete with rivals like Amazon and Microsoft, and with Tencent by its side, it would be able to go up against local competitors including Alibaba Group Holding Ltd.

How Google has been making inroads into China in the recent past

In December, Google launched its AI China Center, the first such center in Asia, at the Google Developer Days event in Shanghai. In January, Google agreed to a patent licensing deal with Tencent Holdings Ltd. This agreement came with an understanding that the two companies would team up on developing future technologies.
Google could host services on Tencent's data centers, and Tencent could promote Google's services to its customers. Reportedly, to expand into China, Google has agreed to launch a search engine that complies with Chinese cybersecurity regulations. The project, code-named Dragonfly, has been underway since spring 2017 and accelerated after a December 2017 meeting between CEO Sundar Pichai and a top Chinese government official. Google has also launched a WeChat mini program and is reportedly developing a news app for China. It is building a cloud data center region in Hong Kong this year; joining the recently launched Mumbai, Sydney, and Singapore regions, as well as Taiwan and Tokyo, this will be the sixth GCP region in Asia Pacific. With no official announcements, we can only wait and see what happens, but from the examples above we can safely conclude that Google is trying to expand into China at full speed. To know more about Google's partnership talks in China, refer to the full coverage in Bloomberg's report.

Google to launch a censored search engine in China, codenamed Dragonfly
Google Cloud Launches Blockchain Toolkit to help developers build apps easily
Google Kubernetes Engine 1.10 is now generally available and ready for enterprise use

Savia Lobo
04 Jun 2018
3 min read
Google recently announced that Google Kubernetes Engine 1.10 is now generally available and ready for enterprise use. Enterprises have long faced challenges around security, networking, logging, and monitoring. With Kubernetes Engine 1.10, Google has introduced new features with robust, built-in security for enterprise use:

Shared Virtual Private Cloud (VPC): enables better control of network resources
Regional Persistent Disks and Regional Clusters: ensure higher availability and stronger SLAs
Node Auto-Repair GA and Custom Horizontal Pod Autoscaler: enable greater automation

New features in Google Kubernetes Engine 1.10

Networking

Workloads can be deployed in Google's global Virtual Private Cloud (VPC) using a Shared VPC model. This gives you the flexibility to manage access to shared network resources using IAM permissions while still isolating departments. Shared VPC lets organization administrators delegate administrative responsibilities, such as creating and managing instances and clusters, to service project admins while maintaining centralized control over network resources like subnets, routers, and firewalls.

(Image: Shared VPC network in Kubernetes Engine 1.10)

Storage

Kubernetes Engine now supports the new Regional Persistent Disk (Regional PD), which makes it easier to build highly available solutions. A Regional PD provides persistent network-attached block storage with synchronous replication of data between two zones within a region. With Regional PDs, you no longer have to worry about application-level replication and can instead rely on replication at the storage layer, a convenient building block for implementing highly available solutions on Kubernetes Engine.
Reliability

Regional clusters, which will be available soon, let you create a Kubernetes Engine cluster with a multi-master, highly available control plane that spreads the masters across three zones in a region, an important feature for clusters with higher uptime requirements. Regional clusters also offer a zero-downtime upgrade experience when upgrading Kubernetes Engine masters. The node auto-repair feature is now generally available; it monitors the health of the nodes in your cluster and repairs any that are unhealthy.

Auto-scaling

In Kubernetes Engine 1.10, the Horizontal Pod Autoscaler supports three custom metric types in beta:

External: for scaling based on Cloud Pub/Sub queue length
Pods: for scaling based on the average number of open connections per pod
Object: for scaling based on Kafka running in the cluster

To know more about the features in detail, visit the Google Blog.

Kublr 1.9.2 for Kubernetes cluster deployment in isolated environments released!
Kubernetes Containerd 1.1 Integration is now generally available
Rackspace now supports Kubernetes-as-a-Service
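The Horizontal Pod Autoscaler's core behavior can be sketched in a few lines of Python. The proportional rule below mirrors the upstream Kubernetes scaling formula (desired = ceil(current * metric / target)); the function name and the numbers in the example are illustrative only:

```python
import math

def desired_replicas(current_replicas: int, current_metric: float, target_metric: float) -> int:
    """Kubernetes HPA core rule: scale the replica count in proportion to
    how far the observed metric is from its per-pod target."""
    return math.ceil(current_replicas * current_metric / target_metric)

# e.g. 4 pods averaging 90 open connections each, with a target of 60 per pod:
print(desired_replicas(4, 90, 60))  # -> 6 (scale up)
print(desired_replicas(4, 30, 60))  # -> 2 (scale down)
```

The same rule applies whether the metric is External, Pods, or Object type; only the source of `current_metric` changes.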
Epicor partners with Microsoft Azure to adopt Cloud ERP

Savia Lobo
29 May 2018
2 min read
Epicor Software Corporation recently announced a partnership with Microsoft to accelerate cloud ERP adoption by delivering Epicor's enterprise solutions on the Microsoft Azure platform. The company plans to deploy its Epicor Prophet 21 enterprise resource planning (ERP) suite on Microsoft Azure, enabling customers to grow and innovate faster as they digitally transform their businesses on Azure's reliable, secure, and scalable foundation. With the Epicor and Microsoft collaboration, customers can now access the power of Epicor ERP and Prophet 21 running on Microsoft Azure. With Microsoft as a partner, Epicor:

Leverages a range of technologies, such as the Internet of Things (IoT), artificial intelligence (AI), and machine learning (ML), to deliver ready-to-use, accurate solutions for mid-market manufacturers and distributors.
Plans to explore Microsoft technologies for advanced search, speech-to-text, and other use cases to deliver modern human/machine interfaces that improve productivity for customers.

Steve Murphy, CEO of Epicor, said, "Microsoft's focus on the 'Intelligent Cloud' and 'Intelligent Edge' complements our customer-centric focus." He further stated that after evaluating several cloud options, Epicor felt Microsoft Azure offers the best foundation for building and deploying enterprise business applications that let customers' businesses adapt and grow. With most prospects now asking about cloud ERP, Epicor says this deployment model lets it offer customers a move to the cloud with the confidence that Microsoft Azure provides.

Read more about this in detail on Epicor's official blog.

Rackspace now supports Kubernetes-as-a-Service
How to secure an Azure Virtual Network
What Google, RedHat, Oracle, and others announced at KubeCon + CloudNativeCon 2018
Platform9 announces a new release of Fission.io, the open source, Kubernetes-native Serverless framework

Sugandha Lahoti
16 Oct 2018
3 min read
Platform9 is announcing a new release of Fission.io, the open source, Kubernetes-native serverless framework. Its new features enable developers and IT operations to improve the quality and reliability of serverless applications. Fission comes with built-in Live-reload and Record-replay capabilities to simplify testing and accelerate feedback loops. Other new features include Automated Canary Deployments to reduce the risk of failed releases, Prometheus integration for automated monitoring and alerts, and fine-grained cost and performance optimization capabilities. With this latest release, Fission also allows dev and ops teams to safely adopt serverless and benefit from the speed, cost savings, and scalability of this cloud-native development pattern, whether on public cloud or on-premises. Let's look at the features in detail.

Live-reload: Test as you type

With Live-reload, Fission automatically deploys code as it is written into a live Kubernetes test cluster, allowing developers to toggle between their development environment and the runtime of the function and rapidly iterate through coding and testing cycles.

Record-replay: Simplify testing and debugging

(Image via Fission)

Record-replay automatically saves the events that trigger serverless functions and allows these events to be replayed on demand. It can reproduce complex failures during testing or debugging, simplify regression testing, and help troubleshoot issues. Operations teams can enable recording on a subset of live production traffic to help engineers reproduce issues or verify application updates.

Automated Canary Deployments: Reduce the risk of failed releases

Fission provides fully automated Canary Deployments that are easy to configure. It automatically increments the proportion of traffic routed to the newer version of a function as long as it succeeds, and rolls back to the old version if the new version fails.
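The ramp-up-or-roll-back behavior of an automated canary deployment can be illustrated with a short simulation. The traffic steps and function names below are hypothetical sketches of the idea, not Fission's actual configuration interface:

```python
def canary_rollout(health_check, steps=(10, 25, 50, 100)):
    """Illustrative canary loop: shift traffic to the new version in
    increasing percentage steps, rolling back to 0% on the first failed
    health check. `health_check` is a callable returning True while the
    new version is healthy."""
    traffic = 0
    for step in steps:
        if not health_check():
            return 0          # roll back: all traffic returns to the old version
        traffic = step        # promote: route `step`% of traffic to the new version
    return traffic            # 100 means the new version is fully promoted

# A new version that stays healthy is fully promoted:
print(canary_rollout(lambda: True))   # -> 100
# One that fails its first check is rolled back immediately:
print(canary_rollout(lambda: False))  # -> 0
```

In Fission itself this loop runs automatically once a canary is configured; the sketch only shows the decision logic being described.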
Prometheus Integration: Easy metrics collection and alerts

Integration with Prometheus enables automatic aggregation of function metrics, including the number of function calls, function execution time, successes, failures, and more. Users can define custom alerts for key events, such as a function failing or taking too long to execute, and Prometheus metrics can feed monitoring dashboards to visualize application metrics.

(Image via Fission)

Kenneth Lam, Director of Technology at Snapfish and a Fission user, said, "Fission allows our company to benefit from the speed, cost savings and scalability of a cloud-native development pattern on any environment we choose, whether it be the public cloud or on-prem."

You can learn more about Fission on its website, where you can also go through a quick demo of all the new features.

How to deploy Serverless Applications in Go using AWS Lambda [Tutorial]
Azure Functions 2.0 launches with better workload support for serverless
How Serverless computing is making AI development easier
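The kind of per-function metrics described above (call counts, failures, cumulative execution time) can be sketched with a small wrapper. This is an illustration of the idea only, not Fission's instrumentation or the Prometheus client API:

```python
import time
from collections import defaultdict

class FunctionMetrics:
    """Minimal sketch of the per-function metrics a Prometheus scrape
    would collect: call count, failure count, and total execution time."""
    def __init__(self):
        self.calls = defaultdict(int)
        self.failures = defaultdict(int)
        self.seconds = defaultdict(float)

    def observe(self, name, fn, *args):
        start = time.perf_counter()
        try:
            return fn(*args)
        except Exception:
            self.failures[name] += 1
            raise
        finally:
            # Count every invocation, successful or not, and accumulate latency.
            self.calls[name] += 1
            self.seconds[name] += time.perf_counter() - start

metrics = FunctionMetrics()
metrics.observe("hello", lambda: "hi")
print(metrics.calls["hello"])  # -> 1
```

An alerting rule like "function failed" then reduces to a condition over these counters, which is what the Prometheus integration automates.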
Amazon re:Invent 2018: AWS Key Management Service (KMS) Custom Key Store

Sugandha Lahoti
27 Nov 2018
3 min read
At the ongoing Amazon re:Invent 2018, Amazon announced that AWS Key Management Service (KMS) is now integrated with AWS CloudHSM. Users now have the option to create their own KMS custom key store: they can generate, store, and use their KMS keys in hardware security modules (HSMs) through KMS. The custom key store satisfies compliance obligations that would otherwise require the use of on-premises HSMs, and it supports AWS services and encryption toolkits that are integrated with KMS. Previously, AWS CloudHSM was not widely integrated with other AWS managed services, so anyone who required direct control of their HSMs but still wanted to use and store regulated data in AWS managed services had to choose between changing those requirements, not using a given AWS service, or building their own solution. With a custom key store, users can configure their own CloudHSM cluster and authorize KMS to use it as a dedicated key store rather than the default KMS key store. When a KMS CMK is used in a custom key store, the cryptographic operations under that key are performed exclusively in the developer's own CloudHSM cluster. Master keys stored in a custom key store are managed the same way as any other master key in KMS and can be used by any AWS service that encrypts data and supports KMS customer-managed CMKs. Using a custom key store does not affect KMS charges for storing and using a CMK, but it does come with an increased cost and a potential impact on performance and availability.

Things to consider before using a custom key store

Each custom key store requires its CloudHSM cluster to contain at least two HSMs. CloudHSM charges vary by region, and pricing comes to at least $1,000 per month per HSM if each device is permanently provisioned. The number of HSMs determines the rate at which keys can be used.
Users should keep in mind the intended usage patterns for their keys and ensure appropriate provisioning of HSM resources. The number of HSMs and the use of availability zones (AZs) affects the availability of a cluster. Configuration errors may result in a custom key store being disconnected or key material being deleted. Users need to manually set up HSM clusters, configure HSM users, and potentially restore HSMs from backup; these are security-sensitive tasks for which users should have the appropriate resources and organizational controls in place.

Read more about KMS custom key stores on Amazon.

How Amazon is reinventing Speech Recognition and Machine Translation with AI
AWS updates the face detection, analysis and recognition capabilities in Amazon Rekognition
Introducing Automatic Dashboards by Amazon CloudWatch for monitoring all AWS Resources
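Those figures imply a hard floor on what a custom key store costs to run. A back-of-the-envelope sketch using the two-HSM minimum and the roughly $1,000 per month per HSM figure quoted in the article (actual CloudHSM pricing varies by region):

```python
def min_monthly_cost(hsm_count: int, per_hsm_usd: int = 1000) -> int:
    """Rough monthly CloudHSM cost for a custom key store.
    The $1,000/month/HSM figure and the two-HSM minimum are the ones
    cited in the article; check current regional pricing before planning."""
    if hsm_count < 2:
        raise ValueError("a custom key store requires at least two HSMs")
    return hsm_count * per_hsm_usd

print(min_monthly_cost(2))  # -> 2000: the cheapest possible custom key store
print(min_monthly_cost(4))  # -> 4000: more HSMs raise cost but also key-use throughput
```

Since the HSM count also bounds the rate at which keys can be used, sizing the cluster is a trade-off between this monthly floor and the expected key-usage pattern.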
Storj Labs’ new Open Source Partner Program: to generate revenue opportunities for open source companies

Melisha Dsouza
30 Aug 2018
3 min read
At the Linux Foundation's Open Source Summit in Vancouver, Storj Labs, a leader in decentralized cloud storage, launched its Open Source Partner Program. The program enables open-source projects to generate revenue when their users store data in the cloud. It was launched to bridge the "major economic disconnect between the 24-million total open-source developers and the $180 billion cloud market," as stated by Ben Golub, Storj's executive chairman and interim CEO.

How does the Open Source Partner Program work?

Open-source projects simply need to integrate Storj into their existing cloud application infrastructure; since Storj exposes an Amazon Web Services (AWS) S3-compliant interface, this integration should be easy. Storj provides blockchain-based, encrypted, distributed cloud storage that improves data security, reliability, and performance compared to traditional cloud storage approaches, and its client-side encryption ensures that data can only be accessed by its owners. While harvesting all these benefits, open-source projects that use the Storj network receive a continuous revenue stream: 60% of gross revenue goes to Storj's storage farmers, and 40% is split among open-source developers. Through simple Storj data connectors integrated with partner platforms, Storj can track data storage usage, and partners are given help desk support and tools to test the network's performance and capabilities.

What's in it for open source companies?

Monetization has always been a challenge for open source companies, which ultimately require revenue to sustain themselves. Open source drives a sizable majority of the $200 billion-plus cloud computing market, yet only a small fraction of that revenue currently makes its way directly back to open-source projects and companies.
The Open Source Partner Program will help open source companies grow and meet their financial goals. Ultimately, open source companies, even those that only provide free products, require revenue to sustain themselves, and the Storj Open Source Partner Program aims to help.

What's in it for Storj?

While the program benefits open source companies, it can also be viewed as an effective marketing strategy for Storj. Open source projects are all the rage these days, and the more these companies turn to Storj for decentralized cloud-based solutions, the more popularity and recognition Storj gains. Storj and open source companies alike recognize the importance of openness, decentralization, and broad-based individual empowerment, which is why the program strikes the right balance to support open source projects. Storj Labs has already signed up over ten major open-source partners, including Confluent, Couchbase, FileZilla, MariaDB, MongoDB, and Nextcloud. These partners will be given early, immediate access to the V3 network private alpha.

You can get a complete overview of the program in Storj's blog post.

5 reasons why your business should adopt cloud computing
Demystifying Clouds: Private, Public, and Hybrid clouds
Google's second innings in China: Exploring cloud partnerships with Tencent and others
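The 60/40 revenue split described above is simple to express. A minimal sketch, using integer cents to keep the arithmetic exact; the function name is ours, not Storj's:

```python
def revenue_split(gross_cents: int, farmer_pct: int = 60):
    """Storj's stated split: 60% of gross revenue to storage farmers,
    40% divided among the open-source partners whose users stored the data.
    Amounts are in cents so the arithmetic stays exact."""
    farmers = gross_cents * farmer_pct // 100
    partners = gross_cents - farmers
    return farmers, partners

# $1,000.00 of gross revenue splits $600.00 / $400.00:
print(revenue_split(100_000))  # -> (60000, 40000)
```

How the 40% partner pool is then divided among individual projects is tracked through the Storj data connectors mentioned above.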