I recently joined Chris Presley for his podcast, Cloudscape, to talk about what’s happening in the world of cloud-related matters. I shared the most recent events surrounding Google.
Topics of discussion included:
New features & data platform:
- Cloud Tasks: A Task Queue service for App Engine Flex
- Private networking connection for Cloud SQL
- Announcing general availability of Cloud Memorystore for Redis
- Cloud Bigtable regional replication now generally available
- Cloud Speech-to-Text and the general availability of Cloud Text-to-Speech
- New Cloud Source Repositories features
- Cloud Inference API: uncover insights from large scale, typed time-series data
Other GCP platform updates
- GCP Support models: Role-Based and Enterprise
- Cisco Hybrid Cloud Platform for Google Cloud: Now generally available
- Cloud TPUs in Kubernetes Engine
- Tesla V100 GPU now GA
- Access Transparency logs now generally available for six GCP services
Cloud Tasks: a task queue service for App Engine Flex
Cloud Tasks is an interesting feature that came out of Google Cloud Platform (GCP). Task queues have always existed in different application development frameworks and platforms. With Cloud Tasks, however, Google has created a managed service that provides task queues for applications you develop on Google App Engine.
Essentially, with a managed service and the underlying platform support, task queues no longer depend on multiple application frameworks. Having a managed service for this really makes things easy for highly responsive applications.
This will be useful for application developers pushing tasks onto different isolated sub-systems. And when different applications are talking to each other, a task queue can hand tasks to a different section of the application for asynchronous processing. Essentially, a managed task queue with Cloud Tasks lets you focus not on the queue itself, but on the outcomes instead.
Overall, that reduces the effort of developing the whole task-queue pattern within your applications, and it really helps you build out your application much faster. It’s another foundational platform tool that developers can leverage to make their applications more responsive.
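As a rough illustration of the pattern Cloud Tasks manages for you (this is plain Python, not the Cloud Tasks API, and the handler and payload names are made up), the producer enqueues work and moves on, while an isolated worker processes it asynchronously:

```python
import queue
import threading

# A minimal, in-process sketch of the task-queue pattern that Cloud
# Tasks provides as a managed service. Everything here is illustrative.
task_queue = queue.Queue()
results = []

def worker():
    # Pull tasks off the queue and run them, decoupled from the
    # code that enqueued them.
    while True:
        task = task_queue.get()
        if task is None:  # sentinel: shut the worker down
            task_queue.task_done()
            break
        handler, payload = task
        results.append(handler(payload))
        task_queue.task_done()

def send_email(payload):
    # Stand-in for a slow operation in an isolated sub-system.
    return f"sent to {payload['to']}"

t = threading.Thread(target=worker)
t.start()

# The producer enqueues a task and does not wait for the result.
task_queue.put((send_email, {"to": "user@example.com"}))
task_queue.put(None)  # signal shutdown
task_queue.join()
t.join()

print(results)  # -> ['sent to user@example.com']
```

With Cloud Tasks, the queue, the worker dispatch and the retry logic all live in the managed service instead of in your process.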
Private networking connection for Cloud SQL
The private networking connection for Cloud SQL is not a new idea, but it is something that has been requested for a long time. Cloud SQL has become really popular, and the connection to Cloud SQL is usually made through GCP VPCs. Now you can also connect to Cloud SQL from your private GCE instances inside a Virtual Private Cloud (VPC).
You used to be able to assign only a public IP address to a Cloud SQL instance. Now you can also assign a private IP address to your Cloud SQL instance on your GCP VPC. Having that private IP address to connect to Cloud SQL makes it much more secure within the GCP environment because there’s absolutely no public way of getting there.
This essentially makes connections to Cloud SQL much easier and more secure. It lowers network latency because private IP networking offers lower latency than public IP networking, especially when all your applications reside within GCP’s different VPCs. And it improves network security because you do not have to have your services exposed to the public internet and deal with the risks that come along with that.
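To see why a private address has no public route, here is a quick standard-library check; the two addresses below are made up for illustration, not real Cloud SQL assignments:

```python
import ipaddress

# Hypothetical private IP a Cloud SQL instance might receive inside a
# VPC, versus a hypothetical public IP.
private_ip = ipaddress.ip_address("10.12.0.3")
public_ip = ipaddress.ip_address("35.184.0.1")

print(private_ip.is_private)  # -> True: RFC 1918 space, not publicly routable
print(private_ip.is_global)   # -> False
print(public_ip.is_global)    # -> True: reachable from the public internet
```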
Announcing general availability of Cloud Memorystore for Redis
Finally, we now have general availability for Cloud Memorystore for Redis. This is something that has been in beta for a while and it is very exciting. Cloud Memorystore for Redis was previously available on GCP, but it was not generally available.
As we know, Cloud Memorystore is a fully managed in-memory data store for very low latency database access, with all the data residing in memory. They have also added a couple of new regions which support Cloud Memorystore - I think those regions are Tokyo, Singapore and the Netherlands. They previously had only six regions where this was available, but now, with general availability, it’s available in eight or ten regions around the world.
I’m really excited about this feature. It’s fully compliant with the Redis protocol, so a lot of customers can quickly move their Redis-based applications onto Cloud Memorystore.
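Because Memorystore speaks the Redis wire protocol (RESP), any Redis client should be able to talk to it unchanged. As a sketch of what that compatibility means at the wire level, here is how a client encodes a command as a RESP array of bulk strings (this is the protocol framing, not a client library):

```python
def encode_resp(*args):
    # Encode a Redis command as a RESP array of bulk strings - the
    # wire format any Redis-compatible server accepts.
    parts = [f"*{len(args)}\r\n"]
    for arg in args:
        data = str(arg)
        parts.append(f"${len(data)}\r\n{data}\r\n")
    return "".join(parts).encode()

print(encode_resp("SET", "greeting", "hello"))
# -> b'*3\r\n$3\r\nSET\r\n$8\r\ngreeting\r\n$5\r\nhello\r\n'
```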
Cloud Bigtable regional replication now generally available
Bigtable regional replication is, again, something that we always wanted to have. Cloud Bigtable data can now be replicated across multiple zones within a region.
With regional replication, you can isolate serving applications from analytics workloads and use cluster routing policies to give each class of application its own cluster. You can provide near real-time backups in case of zonal failures because replication is available within the region. You get improved availability, plus additional analytics throughput from replica clusters, which you can scale independently.
All of these things are really helpful because previously, without this feature, it was a little difficult to scale and to achieve high availability across different zones. It really helps meet some of the SLA requirements that a lot of customers have for their Bigtable deployments.
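The routing idea can be sketched very simply. The profile and cluster names below are hypothetical, and this only simulates the policy in plain Python - it is not the Bigtable API - but it shows how pinning each class of application to its own cluster keeps analytics scans away from latency-sensitive serving traffic:

```python
# Hypothetical app-profile routing: each class of application is pinned
# to its own cluster, so heavy analytics reads cannot degrade serving.
APP_PROFILES = {
    "serving": {"routing": "single-cluster", "cluster": "cluster-a"},
    "analytics": {"routing": "single-cluster", "cluster": "cluster-b"},
}

def route(profile_id):
    # Return the cluster a request should be sent to under this profile.
    return APP_PROFILES[profile_id]["cluster"]

print(route("serving"))    # -> cluster-a
print(route("analytics"))  # -> cluster-b
```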
Cloud Speech-to-Text and the general availability of Cloud Text-to-Speech
Cloud Text-to-Speech has been in alpha and beta for a long time but now, finally, it is generally available. Not much has changed, especially in terms of the API; I think it just got through GCP’s internal product gates in terms of making sure that all the product features are there, operational, relatively bug-free and qualify for general availability.
New Cloud Source Repositories features
Cloud Source Repositories is nothing new. However, what is new is that they have completely revamped how Cloud Source Repositories works, as well as the user interface.
One of the most important new features is semantic code search. I think this project started when Google Research found that developers do a lot of code searches, and that better search would improve their productivity by about 14%.
So, essentially, they revamped Cloud Source Repositories to make it a little easier to manage code, but also to let developers search the code and find the exact piece of code or snippet most relevant to their activity, so they can make code changes or perform other operations.
This is essentially the biggest addition, apart from the UI, which has a few more changes to make things easier and more intuitive for developers. From what I have heard, it is very similar to what Google does on its own - internal Google developers use the same capabilities. This just brings Cloud Source Repositories almost on par with what Google does internally.
Cloud Inference API: uncover insights from large scale, typed time-series data
This is one of the big things that came out of Google. The new Cloud Inference API is used to analyze very large datasets. This is yet another API that Google has enabled for efficient querying of time-series data.
This is really helpful for running inferences across multiple use cases: retailers analyzing foot traffic, collaborative filtering, and IoT companies making production predictions. We have been doing all of this, but a little inefficiently. The Cloud Inference API lets you query a very, very large dataset in near real time, so you can do all your inference calculations over different time windows.
It just makes the whole process a little simpler, so you can integrate it very well with your ML tools and offerings. It is still early days for the Cloud Inference API, and we look forward to seeing how it changes and what additional capabilities it offers as it moves toward general availability.
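The kind of time-windowed query involved can be sketched in plain Python on a made-up event series; the real API operates on far larger, typed time-series datasets, but the windowing idea is the same:

```python
from collections import Counter
from datetime import datetime, timedelta

# Made-up store-visit timestamps - a stand-in for the typed
# time-series data the Cloud Inference API is meant to query at scale.
events = [
    datetime(2018, 9, 1, 10, 5),
    datetime(2018, 9, 1, 10, 20),
    datetime(2018, 9, 1, 11, 3),
    datetime(2018, 9, 1, 11, 45),
    datetime(2018, 9, 1, 11, 50),
]

def counts_per_window(events, window=timedelta(hours=1)):
    # Floor each timestamp to the start of its window, then count
    # how many events landed in each bucket.
    counts = Counter()
    for ts in events:
        bucket = datetime.min + ((ts - datetime.min) // window) * window
        counts[bucket] += 1
    return dict(counts)

print(counts_per_window(events))
# Two visits fall in the 10:00 window, three in the 11:00 window.
```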
Other GCP platform updates
- GCP Support models: role-based and enterprise
The GCP support models are actually a pretty interesting case and a pretty interesting way of thinking about support. They have broken support out into three different levels, because production workloads require a higher SLA than development workloads.
So essentially, for enterprises and anyone running workloads on GCP, they have divided up which support model you need for each of your different environments, and they charge you according to those SLAs. So now you don’t have to pay for production-level support for all your instances on GCP; you only pay production-level support costs for the production environment. That is going to help with costs and with aligning SLAs across environments.
- Cisco Hybrid Cloud Platform for Google Cloud: now generally available
The other update I want to mention is the Cisco Hybrid Cloud Platform for Google Cloud, which is now generally available. This includes all the on-premises enterprise integration for Kubernetes, for Istio, for the service mesh - this has all become generally available. We have talked about this in previous podcasts, and it was one of the biggest announcements from Google Next in San Francisco. It follows the enterprise route that GCP wants to take going forward.
- Cloud TPUs in Kubernetes Engine
One more announcement related to Kubernetes: Cloud Tensor Processing Units (TPUs), which were already available for multiple VM and compute options, are now available in beta for Kubernetes Engine as well. If you are running TensorFlow applications and TensorFlow-based ML models on Kubernetes, or in containers under Kubernetes orchestration, you now have access to Cloud TPUs, too. You can accelerate your whole ML process and your TensorFlow workloads using the TPUs.
- Tesla V100 GPU now GA
The GPU update is that the Tesla V100 GPUs are also now generally available. This has been coming for a while. We previously had the NVIDIA P100 GPUs, which became generally available a couple of months ago; now the Tesla V100 GPUs are generally available as well. This gives more capability to GPU-intensive applications and brings GCP closer to par with Azure and AWS.
- Access transparency logs now generally available for six GCP services
The last announcement was related to access transparency. One of the key things that we had talked about in one of the previous Cloudscape podcasts was access transparency where you can see exactly what data had been touched by Google engineers, as and when you need support for issues, and get a proper audit of that. Now there are six GCP services which have access transparency enabled and are in general availability. So you can immediately start using access transparency, have all the logs and have all the visibility in production.
The six services are Google Cloud Storage, Compute Engine, App Engine, Persistent Disk, IAM and, I believe, Cloud KMS. I think this is just the beginning. Other aspects of the cloud and other GCP services will also become part of the access transparency functionality over time.
This is in general availability now, so you can use it right away.
Listen to the full conversation and be sure to subscribe to the podcast to be notified when a new episode has been released.