XenTegra - Nutanix Weekly
XenTegra will discuss topics surrounding Nutanix's industry-leading, 100% software-defined hyper-converged infrastructure to provide a single cloud platform that seamlessly brings to life your hybrid and multi-cloud strategy. Whether on-prem or in the cloud, you get unified management and operations with one-click simplicity, intelligent automation, and always-on availability.
Nutanix Weekly: Simplifying Access to Geo Distributed Object Data Using Global Namespaces
Federation is an exciting and important new feature that was added to Nutanix Objects in the 4.0 release (Spring 2023). It enables a global namespace to be created across multiple Nutanix object stores, even if they are thousands of miles apart in entirely different geographic locations. Buckets hosted by these different object stores then appear to exist within a single object store, offering a consolidated view of the data.
https://www.nutanix.dev/2023/07/10/simplifying-access-to-geo-distributed-object-data-using-global-namespaces/
Host: Philip Sellers
Co-Host: Jirah Cox
Co-Host: Ben Rogers
1
00:00:03.420 --> 00:00:22.199
Philip Sellers: Welcome to Episode 70 of Nutanix Weekly. I'm your host, Phil Sellers. I am a solutions architect here at XenTegra, and I'm happy to have two great Nutanix resources on the line with me today. I've got Jirah Cox, and I've also got Ben Rogers.
2
00:00:22.540 --> 00:00:24.739
Philip Sellers: Ben, Jirah, how are you guys doing today?
3
00:00:25.190 --> 00:00:26.210
Jirah Cox: Good, thanks, Phil.
4
00:00:27.120 --> 00:00:29.059
Ben Rogers: I'm doing great, Phil, I appreciate it.
5
00:00:29.190 --> 00:00:34.900
Philip Sellers: So, Ben, you were just telling us before we started, you're just getting back off a vacation. You've been out
6
00:00:34.900 --> 00:00:59.900
Ben Rogers: on the North Carolina Outer Banks, and pretty remote, I imagine. Pretty, too. Oh, my God! You know, being a native of North Carolina, when you get out that far, it's hard to believe you're in the same state. It literally has a very tropical island feel to it. But a great time, man. Did a lot of stargazing, did a lot of cool things with my step-
7
00:00:59.900 --> 00:01:12.130
Ben Rogers: daughter and with my granddaughter. So it was really a good time. So glad to be back, glad to have time off, and looking forward to seeing what the future brings, my friend.
8
00:01:12.270 --> 00:01:29.579
Philip Sellers: That's awesome. Yeah, Jirah, long time no see. I got to spend some time with Jirah in person last week. He was able to join us for our mid-year company connect, and it was great seeing you last week, good to see you with the whole XenTegra community, man. Super impressive.
9
00:01:30.260 --> 00:01:43.970
Philip Sellers: Yeah, we appreciate it. Jirah did a great presentation for our team. We're always learning when we get together, so it's great to have partners that build into us, and Jirah's always a great resource to help us that way.
10
00:01:44.370 --> 00:01:55.379
Philip Sellers: So, today's podcast episode, we're talking about some new things going on inside of object storage from Nutanix. So
11
00:01:55.500 --> 00:02:03.240
Philip Sellers: I don't want to steal Jirah's thunder, but, Jirah, you brought this one to us. You want to lead off with a little bit of the news here?
12
00:02:04.030 --> 00:02:09.160
Jirah Cox: Yeah, man, certainly not. Not my thunder to steal, but definitely a super exciting announcement.
13
00:02:09.199 --> 00:02:21.379
Jirah Cox: coming out with our Objects 4.0 release, which is out now. And full credit to Steve Mackerel; that's who wrote this. He's one of our tech marketing engineers,
14
00:02:21.380 --> 00:02:44.600
Jirah Cox: which, if you don't know our TME team, that's, like, 90% marketing. No, it's 90% technical, 10% marketing. Thank you, I got that backwards: heavy on the technical there, with a little bit of marketing sprinkled on. So the team's fantastic, start to finish, and Steve's a real joy. He's one of our global object specialists,
15
00:02:45.330 --> 00:03:07.260
Jirah Cox: which is fitting, because this release brings global namespaces. Right? So we can now take Nutanix Objects, right, like our S3 on-prem, you know, data in buckets as a web technology type of offering, and now unify that globally. Right? So you can have a global namespace. This has been a long-time ask from major customers, to say,
16
00:03:07.540 --> 00:03:18.230
Jirah Cox: Hey, this is cool, that I can have a Raleigh or a Charlotte or a Seattle deployment for this. But I'm a global company, right? How do I really go big with this technology? And this is how we get there.
17
00:03:18.830 --> 00:03:21.530
Philip Sellers: So I have to ask, I mean, is this.
18
00:03:21.630 --> 00:03:36.350
Philip Sellers: for technologists that have been in, you know, infrastructure for a long time, sometimes object storage can be a little confusing. So is this sort of like a DFS namespace inside of a Windows AD network? Is it similar to that?
19
00:03:36.710 --> 00:03:42.380
Jirah Cox: Man, that's interesting. That takes me back, because I was thinking about some other ways to talk about this
20
00:03:42.430 --> 00:03:55.110
Jirah Cox: as we get into this, and some analogies to bring. I was going to talk about DNS and domains versus, like, forests, and I think both are totally applicable, right? It's like there are resources that I want to consume, and maybe control locally,
21
00:03:55.250 --> 00:04:14.419
Jirah Cox: but also access globally, right, or be able to access a second copy of them elsewhere, right? So I think both those analogies really kind of come through for sure, like DFS, yeah, for SMB, right, as a collection of, like, SMB stores that may or may not have the same contents.
22
00:04:14.490 --> 00:04:20.820
Jirah Cox: And yeah, this is a similar construct of global namespaces for object stores that may or may not store the same contents.
23
00:04:21.860 --> 00:04:36.030
Philip Sellers: Awesome. So the introduction here talks a little bit about federation, and you mentioned this is new in Objects 4.0. So from a versioning standpoint,
24
00:04:36.330 --> 00:04:58.380
Philip Sellers: is that tied to a particular AOS release, is that a particular version that we have to be at, or how independent, I guess, is Nutanix Objects? Without memorizing the compatibility matrices, I think it's mostly independent from AOS, from Prism Element, for sure.
25
00:04:58.650 --> 00:05:16.270
Jirah Cox: Objects does run within Prism Central. That'd be the only thing, right? So you're probably on the latest Prism Central, for almost all customers, because that's the multi-cluster management construct. And so there's always goodness coming to that Prism Central release, and that usually gets you on the right path to run the latest Objects release as well.
26
00:05:17.230 --> 00:05:26.949
Philip Sellers: Ben, as you're talking with customers and stuff, how many conversations are you having kind of around Nutanix Objects? I mean, what are customers talking to you about?
27
00:05:27.290 --> 00:05:50.280
Ben Rogers: Well, they're talking. You know, what's really interesting about it is, customers are wanting to look for ways to consolidate their object storage, especially when they're looking at cloud services, multiple cloud services. And so this is a really good talking point for us, because we can consolidate all that on one platform, and we can extend that from on-prem, which is where their traditional footprint is,
28
00:05:50.280 --> 00:06:15.279
Ben Rogers: out to their new cloud footprints and really make it look the same across the environment. So we're having this conversation. I will be the first to admit that I'm coming up to speed on this technology. Man, we've done a lot at Nutanix in the last six months. So I'm a little bit more on the customer side of this, where I'm looking at somebody like Jirah going, educate me: how can I utilize this in my environment, and what is the best way for me to take advantage of this?
29
00:06:15.280 --> 00:06:24.499
Ben Rogers: So again, the DFS comparison, you know, was good for me, because I did run that. But I'm a little more curious of, okay,
30
00:06:24.640 --> 00:06:40.540
Ben Rogers: I've got a global environment, I'm spread out across my environment. How do I ensure that, you know, things are gonna drop where they need to go if I implement this federation format, but yet I still need my silos for geographic regions, if you would say?
31
00:06:42.680 --> 00:07:05.469
Jirah Cox: That's a great question, right? So the article takes this for granted and skips over it, so let's go deeper, off script here. A person might ask, why would I use this, right? So the two big, like, use cases, right, that we're seeing are for object storage as a backup target, and then, like, sort of next-gen cloud-native application storage. And we can get into both of those. For backups, right?
32
00:07:05.470 --> 00:07:27.739
Jirah Cox: That's where we're helping customers move off stuff like, basically, traditional backup appliances, where ultimately it was just a big piece of sheet metal, maybe with some disk shelves attached, but it was offering, like, an SMB share out to the network. And then, you know, backup applications were simply writing to that share. It was a giant, big, almost like a storage array, but maybe even sometimes single-controller,
33
00:07:28.100 --> 00:07:29.400
Jirah Cox: big pile of data.
34
00:07:29.630 --> 00:07:48.770
Jirah Cox: right? And that's fine. Often that's a, you know, price-per-pound kind of use case, right? It's cheap and deep, and that's all fine as well. But the biggest sort of management pain came in with the lifecycling of that, right? Like, how big can I get? If I want to cross a certain threshold, how do I go and scale beyond that?
35
00:07:48.810 --> 00:07:59.980
Jirah Cox: Even if I don't scale, in five years, six or seven years, or whatever my financial schedule is, how do I get off of that and go on to the next one, right? And sometimes there's answers for that, and there's services for that. Sometimes there's replication tools for that.
36
00:08:00.210 --> 00:08:06.840
Jirah Cox: But typically that refresh was at least some amount of pain or management, or hassle or distraction from running the business
37
00:08:07.560 --> 00:08:17.249
Jirah Cox: With Nutanix, right, of course we can also still do SMB shares, but really the next-gen, web-scale way to think about backup targets
38
00:08:17.310 --> 00:08:33.340
Jirah Cox: is with S3, right, as the protocol for storing backup data in object stores. This is where you look at a lot of your vendors, right? Like your Rubrik, your Cohesity; it's now natively in Veeam; it's in Commvault. Even if you have some local storage, maybe your last 14, last 30 days,
39
00:08:33.500 --> 00:08:42.399
Jirah Cox: it's sort of becoming just broadly accepted that long-term storage goes outside of that SMB or, I'll say, appliance-managed storage,
40
00:08:42.570 --> 00:08:48.729
Jirah Cox: and gets pushed out to S3. It's sort of always assumed to be the cheapest of the cheap, the deepest of the deep.
41
00:08:48.930 --> 00:08:59.000
Jirah Cox: And also, when you get there, you get fun capabilities, right? Stuff like WORM, right? Stuff like archive retention lock, so that I get even more protection of the business against, say, accidental deletion or even malicious deletion.
42
00:08:59.240 --> 00:09:04.120
Jirah Cox: So then I get even more of a security blanket wrapped around my backup storage, and that's awesome
43
00:09:04.340 --> 00:09:10.029
Jirah Cox: should be moving in theory to a lower price per gig as well. So that is a win on the financial side.
44
00:09:10.740 --> 00:09:30.010
Jirah Cox: But then the sort of Nutanix "yes, and" to all of that is the lifecycling, right? So you could deploy, like, a petabyte, you know, archive today running on S3, run that for five years, six or seven years; the hardware itself has lived its life, it's time to leave the data center, bring in new hardware. Maybe it's twice as big, right? Maybe now I have twice as much capacity.
45
00:09:30.600 --> 00:09:34.039
Jirah Cox: This is where the Nutanix difference really applies. I bring in the new nodes,
46
00:09:34.370 --> 00:09:54.619
Jirah Cox: put them in the data center, expand that cluster to now bridge the old and the new hardware, eject those old nodes, and I've done zero tuning, revisiting, re-swizzling, migration, nothing, right? The application that is writing to it, your Veeam, your Rubrik, Cohesity, whatever, Commvault, doesn't even know that I made a change.
47
00:09:54.750 --> 00:09:57.280
Jirah Cox: But I just re-platformed that entire
48
00:09:57.300 --> 00:10:00.469
Jirah Cox: ball of data without having to do anything, any real work.
49
00:10:03.090 --> 00:10:09.600
Philip Sellers: And that, to me, makes it feel like, you know, at a platform
50
00:10:09.670 --> 00:10:21.290
Philip Sellers: standpoint, you've got those capabilities. But now you've brought that global namespace to it as well. So now you've got, you know, this object store, which can live forever,
51
00:10:21.680 --> 00:10:32.929
Philip Sellers: can be upgraded, can change its performance characteristics, and I don't have to update a single pointer or anything. I've effectively got the
52
00:10:32.940 --> 00:10:39.829
Philip Sellers: same place to right forever and ever. Amen.
53
00:10:40.160 --> 00:11:01.009
Jirah Cox: It probably has a use case for backup and archive use cases as well, but it is much more, in my mind, aligned with, like, a cloud storage for applications kind of use case. Right? So if I'm putting out an application, whether that's internal or external doesn't matter; S3 is an authenticated protocol that can go inside the firewall or outside the firewall. Either one's fine there.
54
00:11:01.330 --> 00:11:10.300
Jirah Cox: If I'm putting out an app, or I'm distributing data internally. I had one customer who was distributing these 3D assets, right, to all their various global offices,
55
00:11:10.400 --> 00:11:15.840
Jirah Cox: or they wanted to sort of like compile, render them in one place and then consume them in a bunch of places
56
00:11:15.920 --> 00:11:18.020
Jirah Cox: With this kind of access, now
57
00:11:18.130 --> 00:11:31.389
Jirah Cox: we can have a solution, basically like an on-prem CDN, where I can put them in one location, replicate them to a bunch of other locations, and all those locations can pull them down locally, right, saving on WAN bandwidth, increasing performance, increasing speed.
58
00:11:31.410 --> 00:11:42.820
Jirah Cox: But if their local one was down for some reason, they can also go elsewhere to retrieve that same data where it's stored elsewhere. That's why I say it's sort of like an on-prem CDN, content delivery network, type of outcome
59
00:11:42.910 --> 00:12:01.060
Jirah Cox: that we can now offer as one global federation, or multiple federations, even, right? So I can have, you know, a whole bunch. I could have 30 offices, right, and say some are my, you know, accounting federation, some are my, you know, development federation, right? And I have those all different and managed within Nutanix.
60
00:12:02.140 --> 00:12:12.109
Philip Sellers: Yeah, that makes a lot of sense to me. And as I think about cloud-native applications, you know, coming from an organization that was trying to move
61
00:12:12.270 --> 00:12:37.330
Philip Sellers: into more microservice and cloud-based technologies, you know, one of the things we were thinking about was, you know, pods of our application all around different geographies. And this seems like a huge enabler as we adopt S3-type buckets in the application, to now have that data transport with us to each of those regions, too.
62
00:12:37.630 --> 00:12:39.230
Jirah Cox: totally
63
00:12:39.330 --> 00:13:01.430
Jirah Cox: I can think of a lot of use cases for, like, edge sites. I mean, edge sites, of course, still need backup, but they might even do, like, local video storage or whatnot. And I can then use that to replicate, like, many-to-one; maybe I bring a bunch of remote sites back to one HQ data-center-grade site, but then I make them all look like one giant happy data storage, you know, family through a federation as well. So it's really pretty cool.
64
00:13:01.460 --> 00:13:27.929
Ben Rogers: So I can think of a customer that we've been talking to that this will hit the mark for right off the bat. They're running manufacturing, but each plant does different manufacturing, and they have to have different logos for their packaging for that factory. But they want to have a unified space. Well, this will do exactly what you're saying. They can have everything pretty much housed in the data center, and then the pieces they need to have on the edge, they can replicate those out, or whatever technology.
65
00:13:27.930 --> 00:13:51.160
Ben Rogers: But when you're looking at their tree, it'll all be, you know, Company XYZ, and then they'll have different locations, and those locations will have the specific artwork and everything that they need for that manufacturing facility. So this is gonna be great for that process, because now I can go back and go, guys, we can make it look like all one tree. It can be, you know, Company XYZ, and then under Company XYZ we can
66
00:13:51.160 --> 00:14:00.790
Ben Rogers: replicate portions of this out to the edge that you need, and still have centralized storage back in the data center. So really cool, man. I look forward to having that conversation with our customer.
67
00:14:01.520 --> 00:14:20.369
Jirah Cox: Yeah, that sounds really cool, actually. I can picture even, like, automated site deployments, right, where I just pick, like, is this gonna be a red site or a blue site, then I can deploy it, automate the full stack, and then know that it's gonna get access to that same data set even immediately, right? And then also replicating over time, so it gets even more locality and caches closer to where the work is going to happen.
68
00:14:20.760 --> 00:14:42.869
Ben Rogers: And then I think it's cool what you mentioned, Jirah, which goes back to the just basic goodness of Nutanix: the ability to inject and eject nodes as the environment needs to grow and upgrade. All of those things you really do forget. When you talk about a service like this, it's built back on that platform and foundation that we're always talking about: the ease of use, flexibility, simplicity.
69
00:14:43.360 --> 00:14:54.949
Philip Sellers: That's the enabling factor here. And I think that's what it underscores: as Nutanix develops this platform, I mean, it's built on that enabling
70
00:14:54.960 --> 00:14:56.550
Philip Sellers: core technology.
71
00:14:57.040 --> 00:15:14.690
Philip Sellers: So I do want to spend some time kind of peeling the onion a little bit and talk about how we get into this architecture, this federation, you know. The next section of the blog post goes into it a little bit more. But, Jirah, can you explain a little bit from an architectural standpoint?
72
00:15:14.830 --> 00:15:29.209
Jirah Cox: Yeah, totally. So, and if you're more of a visual learner than, like, auditory, you can watch this all in a quick little YouTube video on our YouTube channel as well. But when you go to create a new object store federation,
73
00:15:29.310 --> 00:15:39.650
Jirah Cox: right, then you get prompted to say, well, what should be inside of it? And so then that's simply a matter of picking. You can do as few as one, but really, like, I think three is recommended for sort of geo-diversity and resiliency.
74
00:15:39.740 --> 00:15:41.190
Jirah Cox: Pick 3 different
75
00:15:41.220 --> 00:16:10.500
Jirah Cox: object stores, right, that then serve as, like, core members, right? And we call them core members because that's where they're going to store some additional data that really is, like, metadata of, like, what is the federation shaped like, and who else is in it? And of course, you know, if you've seen our cluster minimums, right, we like to do, like, a minimum of three of something, so we can still lose something and still have full continuity as well. So three is a good starting point for scale-out systems, and we see that here as well.
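[Editor's note: Jirah's "minimum of three" point is ordinary majority-quorum math. A minimal sketch, not Nutanix code, of why three core members tolerate one failure:]

```python
def majority_quorum(members: int) -> int:
    """Smallest number of core members that still forms a majority."""
    return members // 2 + 1

def tolerable_failures(members: int) -> int:
    """How many core members can be lost while a majority remains."""
    return members - majority_quorum(members)

# With the recommended 3 core members, one can fail and the federation
# metadata is still served by a majority of 2. With only 1 member,
# any loss is fatal.
print(majority_quorum(3), tolerable_failures(3))  # 2 1
print(majority_quorum(1), tolerable_failures(1))  # 1 0
```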
76
00:16:11.040 --> 00:16:20.290
Jirah Cox: So there's sort of 2 components to it. Right? There's the metadata service, which is really tracking like who is storing what data and where and what is the shape and size of a given federation.
77
00:16:21.030 --> 00:16:27.600
Jirah Cox: And then also, more importantly, those controllers, right? So, to think back to, like, our DFS analogy, sort of,
78
00:16:27.620 --> 00:16:32.379
Jirah Cox: DFS could be like, I can go to a server and query, like, where is this file share?
79
00:16:32.510 --> 00:16:44.879
Jirah Cox: And the answer could be, it's here, or it's over there. But I get the sort of traffic-cop effect of being directed to where the resource I want to get to is located. Same thing here, right? So the federation controller knows
80
00:16:45.050 --> 00:16:58.000
Jirah Cox: what lives where, and then, depending on whether I want to do, you know, a discovery action, or actually, like, a CRUD (create, read, update, delete) action, it'll tell me where to go to make sure that that happens.
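[Editor's note: the "traffic cop" role can be sketched in a few lines. This is a conceptual model only, with made-up bucket names and endpoints, not Nutanix's implementation: a controller maps each bucket to the object store that owns it and tells the client where to send its call, DFS-referral style.]

```python
# Hypothetical federation metadata: bucket name -> owning object store.
FEDERATION_MAP = {
    "backups-east":  "https://objects-raleigh.example.com",
    "backups-west":  "https://objects-seattle.example.com",
    "render-assets": "https://objects-charlotte.example.com",
}

def route_request(bucket: str, verb: str) -> str:
    """Traffic cop: return the call a client should make for this bucket."""
    endpoint = FEDERATION_MAP.get(bucket)
    if endpoint is None:
        raise KeyError(f"bucket {bucket!r} not known to this federation")
    return f"{verb} {endpoint}/{bucket}"

# A GET on a Raleigh-owned bucket is directed to the Raleigh store.
print(route_request("backups-east", "GET"))
```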
81
00:17:00.510 --> 00:17:11.950
Philip Sellers: That makes sense. Again, yeah, I mean, high resiliency, high availability, built into the product, thought out from the beginning. So it's,
82
00:17:13.130 --> 00:17:41.040
Philip Sellers: it sounds like it's enterprise-ready from the beginning, which is another Nutanix tenet. I mean, as you talk about the commonality between, you know, things like your core number of Nutanix DSF nodes, you know, Files nodes, in the multiples of three. Yeah, I mean, it makes total sense. I think what you'll see here from the beginning, right, is it's built to be cross-Prism Central, right? So no need for this all to live
83
00:17:41.040 --> 00:17:58.929
Jirah Cox: in one Prism Central; in fact, the opposite. Since Prism Central usually is a, let's call it a regional or availability zone construct, it's actually expected that you're probably going to cross that from day one. So you can have these object stores that are all federating together across multiple Prism Centrals. And that's expected and even encouraged.
84
00:17:59.010 --> 00:18:12.070
Philip Sellers: Yeah, that's important to call out. I noticed that in the diagram, but it's even more important to point out, because we typically think of Prism Central as that roll-up point, but this is bridging across those.
85
00:18:12.670 --> 00:18:28.579
Philip Sellers: So the relationship to availability zones is also kind of called out here. You know, can you tell us a little more about that relationship to the availability zones and kind of how to think about it?
86
00:18:29.490 --> 00:18:51.920
Jirah Cox: Totally, right? So it even highlights here in the article that when you do that multi-PC deployment, one cool thing it does is that, of course, in the construct of object storage, right, in the S3 protocol, the access keys are actually sort of generated by the cluster owning the data. Right? So, like, you're Philip, I'm Jirah, he's Ben;
87
00:18:51.950 --> 00:18:56.110
Jirah Cox: our usernames don't matter whatsoever to the data, right? The object store really generates
88
00:18:56.150 --> 00:19:05.159
Jirah Cox: the access keys, both the random string of numbers and letters that is, quote, the username, and also the password, right? That's all just sort of
89
00:19:05.340 --> 00:19:30.150
Jirah Cox: dictated by the object store. But the cool thing is that when you configure this replication, the, we reference it here as the IAM replication, actually handles all that as well. So that means now my keys can go anywhere, to any object store, even though they were created at the, you know, the Raleigh object store. I can use them in Charlotte, or Seattle, or Tokyo, or anywhere else. Those access keys now are,
90
00:19:30.350 --> 00:19:40.939
Jirah Cox: what's the right word here, they can be granted access to objects and resources elsewhere, even though my keys might live in, or were created in, a different store.
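[Editor's note: a rough model of the IAM-replication idea being described. The class and site names are hypothetical illustrations, not the real Nutanix API: a key minted at one member store is copied to every member, so any endpoint in the federation can authenticate it.]

```python
class Federation:
    """Toy federation: each member store keeps its own key table."""

    def __init__(self, stores):
        self.stores = {name: {} for name in stores}  # store -> {access_key: secret}

    def create_key(self, store, access_key, secret):
        # Key is minted at one store, then IAM-style replication
        # pushes it to every other member.
        self.stores[store][access_key] = secret
        self._replicate(access_key, secret)

    def _replicate(self, access_key, secret):
        for key_table in self.stores.values():
            key_table[access_key] = secret

    def authenticate(self, store, access_key, secret):
        return self.stores[store].get(access_key) == secret

fed = Federation(["raleigh", "charlotte", "tokyo"])
fed.create_key("raleigh", "AKIAEXAMPLE", "s3cret")

# The key was created in Raleigh, but Tokyo accepts it too.
print(fed.authenticate("tokyo", "AKIAEXAMPLE", "s3cret"))  # True
```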
91
00:19:41.550 --> 00:19:46.390
Philip Sellers: Then you're gonna have to watch out. I've got your keys now, so I might come after your car.
92
00:19:46.660 --> 00:20:04.490
Ben Rogers: So, again, I'm looking at this from the customer point of view. So what you're telling me is that if I've got multiple object stores out there that are being managed by multiple Prism Centrals, we now have a way to kind of bind all those Prism Centrals and stores under one federation?
93
00:20:05.760 --> 00:20:16.260
Jirah Cox: Yeah, 100%, for the object side of it, right, the S3 part of it. We've always had solutions for, like, VMs, and file shares for, like, SMB, all that good stuff. This one's now,
94
00:20:16.330 --> 00:20:27.479
Jirah Cox: with respect to the objects protocol, right, the S3 protocol. Yeah, those keys that you generated in any one of those sites can now gain portability in that federation to go anywhere, to any other object store,
95
00:20:27.620 --> 00:20:31.580
Jirah Cox: which isn't explicitly granting access, but now they are grantable.
96
00:20:31.770 --> 00:20:32.580
Philip Sellers: Hmm.
97
00:20:33.310 --> 00:20:35.199
Jirah Cox: all right. So basically,
98
00:20:35.270 --> 00:20:52.259
Jirah Cox: you know, to continue using our DFS analogy, which, at some point, if you really know S3, you probably are rolling your eyes at us, but if you don't, this is probably helping, right: with that DFS federation, now my username and password for one file server now works at the other one, if I'm in the ACLs for that one.
99
00:20:54.910 --> 00:21:18.939
Philip Sellers: So, super important too, we have to be able to see what's going on inside of it. So Nutanix thought about that. So we've got new views inside of Prism Central for federated namespaces as well. You know, the article goes in, we've got a nice screenshot, and I know a picture is worth a thousand words, so you can find the link in the description for the podcast. But
100
00:21:18.940 --> 00:21:27.289
Philip Sellers: Jirah, I mean, what's important for us to know and see in terms of object stores and watching and scaling out?
101
00:21:29.540 --> 00:21:43.540
Jirah Cox: If I think about this, like, if I had to run this environment, right, what do I care about the most? And you're right, it's on this week's edition of "let's describe a screenshot to you in words," right? You know, what object stores have I created, right? Where are they located?
102
00:21:43.620 --> 00:22:12.360
Jirah Cox: You know, how big are they, right? How much data am I storing in each one? That's super important. How do I get there, right? So all of them have, of course, local IP addresses per site for accessing, and we'll come back to IPs and accessing that way later on. And probably also the last thing I want to know, in Nutanix parlance, is which PC is managing them, right, which also sort of touches on which availability zones are they a member of, which is usually, you know, often regional, or like a continent, perhaps.
103
00:22:13.290 --> 00:22:28.699
Philip Sellers: And so, you know, I mean, I think we may take it for granted, but there's object stores, and then there's buckets, and so a little bit of the screenshot here deals with the number of buckets that we've got and how many objects are in things. So,
104
00:22:28.780 --> 00:22:37.739
Philip Sellers: for someone approaching object storage new, I mean, how do you describe the difference between an object store, an object, and buckets?
105
00:22:38.070 --> 00:22:49.820
Jirah Cox: Hmm! Someone could do a better job with this, but I would think it's a fair analogy to say that you can think of an object store kind of like a file server, and a bucket like a share, right? So it is a container,
106
00:22:50.270 --> 00:22:54.580
Jirah Cox: possibly one of several, right, probably one of several on the logical object store,
107
00:22:55.960 --> 00:23:20.560
Jirah Cox: and then an object would be analogous to a file, right? And then, of course, you have the buckets that then hold the objects themselves, which is the data you're storing, right? Whether that is, you know, if you are running a social media site, right, every time someone uploads a new avatar or attaches a picture to a post, all those things are probably going to land somewhere on some flavor of S3 storage on the back end.
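[Editor's note: Jirah's analogy (object store ~ file server, bucket ~ share, object ~ file) can be made concrete with a toy in-memory model. This is purely illustrative; the names and methods are made up, though they mirror the shape of real S3-style APIs:]

```python
class ObjectStore:
    """Toy object store: buckets hold objects, like shares hold files."""

    def __init__(self, name):
        self.name = name
        self.buckets = {}  # bucket name -> {object key: bytes}

    def create_bucket(self, bucket):
        # Like creating a share on a file server.
        self.buckets.setdefault(bucket, {})

    def put_object(self, bucket, key, data: bytes):
        # Like saving a file into a share.
        self.buckets[bucket][key] = data

    def get_object(self, bucket, key) -> bytes:
        # Like reading a file back out.
        return self.buckets[bucket][key]

store = ObjectStore("raleigh")
store.create_bucket("avatars")                 # the "share"
store.put_object("avatars", "phil.png", b"\x89PNG")  # the "file"
print(store.get_object("avatars", "phil.png"))
```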
108
00:23:21.440 --> 00:23:30.279
Philip Sellers: So hopefully no one punches us for that analogy, but I think it's helpful for infrastructure folks who, you know,
109
00:23:30.400 --> 00:23:34.300
Philip Sellers: approach it from a different kind of a problem.
110
00:23:34.720 --> 00:23:48.310
Philip Sellers: The next section really talks us through how clients, requests, or accesses... All right, I'm not sure that was English. Let me back that up and try again: how client requests are routed. I don't know what I said the first time.
111
00:23:48.660 --> 00:24:16.330
Jirah Cox: And we probably don't have to step through the data path, like, call by call. We should point out, this is on the Nutanix developer blog, which is really cool. These are kind of more technical posts in nature. We sort of pick and choose as we feed the blog engines, right, between the corporate Nutanix blog on nutanix.com, the developer blog on nutanix.dev, which is where we are today, and also the community blog as well. We've seen some great stuff
112
00:24:16.330 --> 00:24:21.599
Jirah Cox: about Nutanix on, say, OVH, or some folks that have put together some great blog posts
113
00:24:21.600 --> 00:24:31.220
Jirah Cox: out of their home labs. So we have a wonderful number of places we can pull from; today we're on the dev site. So there are a couple of very highly detailed paragraphs here
114
00:24:31.220 --> 00:24:43.079
Jirah Cox: around, how do calls work? I think, for podcast-level specificity, we can just say, guess what, it works. There are a couple of actions that are more important that do get routed through those core members,
115
00:24:43.210 --> 00:24:44.869
Jirah Cox: When I need to do stuff like
116
00:24:45.120 --> 00:24:59.369
Jirah Cox: I don't know, to use some more terrible analogies, right, the S3 equivalent of, like, looking up a phone number in a phone book. But then, when I make that phone call, it can be either local... Kids, we used to have these things called phone books. They were printed out.
117
00:25:00.090 --> 00:25:11.299
Jirah Cox: But when I make the phone call, right, it might be local, it might be remote, but my conversation gets where it needs to, to either do the PUT or the GET request, to either place or retrieve data.
118
00:25:11.960 --> 00:25:24.190
Philip Sellers: And I think that's important, too. So I mean, if you think about S3 as a protocol, you know, it's a little different. It's more like what you get with HTTP calls from a web browser,
119
00:25:24.240 --> 00:25:41.019
Philip Sellers: as opposed to, you know, your traditional SMB. So you've got GET and PUT, which are the same verbs you have from a web browser, same thing that you would do with any API. So, you know, definitely very much cloud-native technology.
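[Editor's note: the GET/PUT point is easy to see in code. S3 is plain HTTP underneath: uploading an object is a PUT to a bucket/key URL, downloading it is a GET on the same URL. The endpoint and bucket below are made up for illustration, and nothing is sent over the network; we only construct the requests (a real call would also need S3 authentication headers):]

```python
from urllib.request import Request

# Hypothetical on-prem object store endpoint.
endpoint = "https://objects.example.com"

# Upload an object: HTTP PUT to /<bucket>/<key>.
put = Request(f"{endpoint}/avatars/phil.png", data=b"...", method="PUT")

# Download the same object: HTTP GET to the same URL.
get = Request(f"{endpoint}/avatars/phil.png", method="GET")

print(put.get_method(), get.get_method())  # PUT GET
```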
120
00:25:41.030 --> 00:25:49.690
Philip Sellers: Yeah, I was looking through these steps, and I was happy to hear you say, let's not go step by step.
121
00:25:50.230 --> 00:26:05.460
Jirah Cox: It's not to put people to sleep on the podcast. Yeah, if you tuned in today to listen to the step-by-step handshake of how my S3 request gets looked up and then honored, I'm so sorry we're disappointing you, but we do it to stay awake and not crash in traffic.
122
00:26:05.470 --> 00:26:28.529
Jirah Cox: But it's a great follow-up point. Because it's web-native technology, we get to bring in really cool stuff that we sort of never had back in, to stick with the analogy, the DFS days, like: how do I put a load balancer in front of this? How do I do geo-routing with that conversation around the load balancer? Right? Like, if I work in the Budapest office, I want to retrieve data
123
00:26:28.580 --> 00:26:37.749
Jirah Cox: as closely to where I'm sitting as possible, right? Maybe that's somewhere local or in a neighboring country, but not having to go across the world to pick something up if it is closer.
124
00:26:37.970 --> 00:26:43.330
Jirah Cox: So with GSLB, which is, I think, a vendor-agnostic term... what, global
125
00:26:43.760 --> 00:26:47.279
Jirah Cox: service load balancing? I know 3 of the 4 letters.
126
00:26:47.630 --> 00:26:51.030
Jirah Cox: I think it's global server load balancing.
127
00:26:51.450 --> 00:27:05.420
Jirah Cox: So basically, it's a way, using some health-monitoring DNS trickery, some, perhaps, weighting of records that I can put in as the admin, that I can say: when you ask where something is, if it's close to you, we'll send you somewhere close to you,
128
00:27:05.560 --> 00:27:13.719
Jirah Cox: but if that's down or unreachable or having a bad day or whatever, we'll go somewhere else, so you still get your request honored.
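A minimal sketch of the health-aware, proximity-weighted resolution Jirah is describing. The site names, distances, and health flags are all made up for illustration; a real GSLB makes this decision inside DNS with health probes and weighted records rather than a Python function.

```python
# Sketch of GSLB-style resolution: answer with the closest healthy site,
# and fail over to a surviving site when the closest one is down.
# All site data here is hypothetical.

def resolve(client_site: str, sites: dict) -> str:
    """Return the closest healthy site for a client; fail over if needed."""
    healthy = {name: info for name, info in sites.items() if info["healthy"]}
    if not healthy:
        raise RuntimeError("no healthy sites available")
    # Pick the healthy site with the lowest distance to the client.
    return min(healthy, key=lambda name: healthy[name]["distance"][client_site])

sites = {
    "la":  {"healthy": True, "distance": {"la": 0, "nyc": 4000}},
    "nyc": {"healthy": True, "distance": {"la": 4000, "nyc": 0}},
}

print(resolve("nyc", sites))   # nyc (closest healthy site)
sites["nyc"]["healthy"] = False
print(resolve("nyc", sites))   # la  (failover to the surviving site)
```

The second lookup shows the point made above: the answer may not be as close as would be ideal, but the request still gets honored.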
129
00:27:13.750 --> 00:27:34.539
Jirah Cox: Maybe not as close as would be ideal, but it's a way of adding some awareness of data health and integrity to the conversation. So with this, now we can do fun stuff, like when I want to retrieve a 3D asset, maybe from this bucket on my on-prem CDN, or restore from a backup, right, from my backup now federated across cloud and private sites,
130
00:27:35.080 --> 00:27:38.190
Jirah Cox: I can now get that from the closest location where it's available
131
00:27:38.210 --> 00:27:44.369
Jirah Cox: or somewhere farther away, if it's not replicated locally close to me. Similarly, when I want to do
132
00:27:44.380 --> 00:27:52.970
Jirah Cox: a data write, right? Maybe if I'm one of these artists generating 3D assets, I might have 3 offices around the world where I can do
133
00:27:53.220 --> 00:28:02.600
Jirah Cox: the PUT, right, the save action, and that will be replicated to 30 other offices that can all do the retrieval, right? I can set up that replication in lots of fun ways.
134
00:28:02.620 --> 00:28:09.940
Jirah Cox: So because it's a web-native technology, load balancers eat this for breakfast, right? They can do it very, very capably, right from the get-go.
135
00:28:11.390 --> 00:28:32.559
Philip Sellers: You know, the next thing: if I'm a customer who already has an object store and is thinking about Federation, and hey, this really speaks to my use case, what happens to my local namespace? What happens as I federate? That's the next concern, I think, that comes up.
136
00:28:32.900 --> 00:28:46.860
Jirah Cox: Yeah, so it sticks around, and we show it in the blog post here. Every Nutanix cluster running the object service includes the object browser, right? Which is not the only way you can manage it; of course you can get there with any S3 client,
137
00:28:46.970 --> 00:28:54.430
Jirah Cox: but we also do include one natively that runs within your browser, and as part of that you can browse the local namespace, which doesn't go away,
138
00:28:54.480 --> 00:29:18.179
Jirah Cox: or the parts of it that are also holding federated resources as well. So it sort of becomes... oh man, I almost genuinely and not ironically said "prism," not to be on brand, but to show that it can show multiple facets of what's going on there, right, the local and the remote. Maybe it's more like a split telescope. I think prism actually is close enough to being on brand.
139
00:29:18.680 --> 00:29:43.660
Philip Sellers: Well, that's good. And that also says any of the existing data is going to stay. So it's non-disruptive to federate, which is also a great thing. It's not like you destroy anything as you adopt Federation. Jirah Cox: Phil, your DFS analogy is getting frustratingly more and more useful the more we go on, probably most frustrating to people who are thinking,
140
00:29:43.660 --> 00:29:54.700
Jirah Cox: I can get to the share through the local file server, or, of course, through the DFS namespace, right? It's more like logical traffic routing than it is
141
00:29:54.700 --> 00:29:58.339
hard plumbing, right? So to your point, yeah, of course,
142
00:29:58.340 --> 00:30:04.799
Jirah Cox: no data churn as a result of the Federation, unless I configure it, right, like migration or replication.
143
00:30:05.050 --> 00:30:06.000
Philip Sellers: Yeah.
144
00:30:06.340 --> 00:30:24.109
Philip Sellers: And what's great, in the article here we've got good screenshots, which of course listeners can't hear, but if you happen to be watching on YouTube you see it on screen. You've got different tabs associated with the type of namespace, whether it's local or a federation that you're participating in.
145
00:30:24.370 --> 00:30:29.419
Philip Sellers: And it looks like here we can actually participate in multiple federations. Is that true as well?
146
00:30:29.500 --> 00:30:39.610
Jirah Cox: Sure can. Yeah. So, you know, the Federation is... this is the sound of me thinking out loud... it's like a user-definable
147
00:30:39.620 --> 00:30:58.030
Jirah Cox: grouping, right? So like, there's no requirement that like, just because it runs the objects, you know, service on one at the next cluster that it has to be in all federations or none like I can pick and choose. I want this object store to be part of that one, but I can deploy it. I've already store that's part of a different Federation as well.
148
00:30:58.030 --> 00:31:15.209
Jirah Cox: I can think of use cases for that, well, even for use cases like backup versus CDN versus cloud-native application storage and retrieval, or also stuff like multi-tenancy, right? Like tenant 1 in Federation 1, tenant 2 in Federation 2, and those are fully separate,
149
00:31:15.460 --> 00:31:16.890
Jirah Cox: you know, placements of data.
150
00:31:18.330 --> 00:31:26.739
Philip Sellers: So, Ben, as you're listening to this, what questions do you have? I mean, what do you think customers should be asking about?
151
00:31:26.890 --> 00:31:38.209
Ben Rogers: So, I mean, the separate-federations thing kind of got me interested. And I'm thinking of this mainly from mergers and acquisitions. So if I had started out with
152
00:31:38.370 --> 00:31:44.739
Ben Rogers: a parent Federation, but I had to bring another Federation on in an M&A situation,
153
00:31:44.890 --> 00:31:56.970
Ben Rogers: you lost me a little bit there. Like, how would it work to have two Federation zones going to the same storage? Explain that a little better to me; I'm a little lost with that concept.
154
00:31:57.060 --> 00:32:00.739
Jirah Cox: So it could be either one, really, Ben, right? So I...
155
00:32:01.550 --> 00:32:25.020
Jirah Cox: We should caveat, or talk about, Nutanix-governed S3, right? There are ways you could certainly ingest data from any other S3, whether that's public cloud or any other kind of on-prem S3 solution. But yeah, to your point, we certainly can use federations for a data migration use case, right? So the object store can live elsewhere, but then we bring it more into the family, right, for governance reasons.
156
00:32:27.920 --> 00:32:40.280
Philip Sellers: And it looks like you've thought a couple of steps ahead. I mean, you know, migrations, Federation with fault tolerance, and migration is also a capability.
157
00:32:40.370 --> 00:33:04.559
Jirah Cox: Yeah, I mean, I think the underlying thing is there was a lot to think about as Federation came to life. Philip Sellers: Talk us through a little bit of the fault tolerance and migration use cases. Jirah Cox: Yeah. So, you know, when I've talked about this in the past, there's been that desire, and some of those things we solved for earlier, even before Federation, but some things get even easier with Federation in play now,
158
00:33:04.560 --> 00:33:11.649
Jirah Cox: like, great, if I have my LA and New York data centers, I can have the same data in each one,
159
00:33:11.650 --> 00:33:36.900
Jirah Cox: or I want the same data in each one, and I want to, you know, lose power to one site or lose my WAN link to one of these data centers and have everything just keep on working, right? And then with that, like we talked about before, a GSLB or DNS-based direction. I can also do it without GSLB if I simply want to sort of micromanage my DNS records at each site; if I'm running site-aware DNS servers, that can work as well.
160
00:33:36.920 --> 00:33:49.270
Jirah Cox: I can now say, hey, the same data is in LA or New York; go to either data center, right? Or go to whichever one is closer to you, or whichever one is alive. So now I've gained site fault tolerance for that data set, for that application, for that use case,
161
00:33:49.760 --> 00:33:53.729
Jirah Cox: like like Ben touched on.
162
00:33:53.760 --> 00:34:12.270
Jirah Cox: I could have a federation that includes LA and New York, and a bucket starts life off in one or the other, but then we decide that, hey, this is a very latency-sensitive application; move that bucket to the other one, right, where maybe the application is running. So moving within a Federation,
163
00:34:12.270 --> 00:34:26.159
Jirah Cox: or moving across federations. Or I can move from the local namespace into a federated namespace, right? Saying this used to be site-local by design, but now we want to make it more available as part of the Federation as well.
164
00:34:26.489 --> 00:34:28.530
Philip Sellers: Yeah, that's awesome.
165
00:34:28.620 --> 00:34:39.680
Philip Sellers: You know, we keep talking about global server load balancing, and the next section actually talks specifically about that with geo-distribution.
166
00:34:39.770 --> 00:34:57.260
Philip Sellers: Is this a third-party GSLB that we're kind of talking about? So it's a separate appliance or application, like NetScaler... you know, I can't even think... F5. Definitely those are the big 2, right, NetScaler and F5. I mean, you could probably write it
167
00:34:57.260 --> 00:35:12.130
Jirah Cox: yourself with nginx and a whole bunch of native web-scale... whatever your web server scaling technology is. But sure, certainly your turnkey, off-the-shelf commercial solutions would be stuff like NetScaler, stuff like F5.
168
00:35:12.650 --> 00:35:20.870
Jirah Cox: But yeah, all of that. So that is the mostly client-facing load balancing. This is like when I want to go to,
169
00:35:21.140 --> 00:35:35.220
Jirah Cox: you know, objects-federation-01.contoso.com: where is that thing? And it knows where those things are, and it directs you to the closest location for that, or if one site is down, it directs you to a surviving site, right? So that you still get
170
00:35:35.300 --> 00:35:42.690
Jirah Cox: good service. There are, and we have a caveat here in the article, some layers of load balancing, in that we use Envoy
171
00:35:42.860 --> 00:35:48.710
Jirah Cox: closer to the metal, right? So, like everything we do at Nutanix, this runs as a virtual machine, runs as a VM.
172
00:35:48.890 --> 00:35:52.490
Jirah Cox: And so, fundamentally, if you pop the hood,
173
00:35:52.510 --> 00:36:02.549
Jirah Cox: you know, objects is a service that runs on one or more Prism Element clusters, right? The cluster is literally just a collection of nodes that do compute for virtual machines as well as storage,
174
00:36:03.030 --> 00:36:27.460
Jirah Cox: as multiple VMs, right? I think your default, sort of run-of-the-mill, out-of-the-box objects deployment is, I think, 4 worker VMs and 2 load-balancing VMs, last I looked. So there are some load balancers; you'll see "-lb" in the VM name. That's going to run Envoy. Those are doing load balancing for us, right? Which VM is governing this vDisk, or this datastore, or this data placement?
175
00:36:27.600 --> 00:36:34.569
Jirah Cox: That's at a lower level than the customer-facing and, to your point, Philip, customer-provided GSLB fabric.
176
00:36:35.200 --> 00:37:02.440
Philip Sellers: Yeah. So I think that's a huge distinction to make. Nutanix is providing everything that's needed under the covers; the customer needs to bring their own global server load balancing for client-facing activities, or DNS, and likewise bring their own DNS server as well. Yeah, DNS. And you're right, I mean, we get into situations with DNS; I mean, you can do simple round-robin,
177
00:37:02.440 --> 00:37:06.329
Philip Sellers: but that's not really intelligent. And so
178
00:37:06.330 --> 00:37:18.209
Philip Sellers: bringing intelligence to the conversation. A Gslb. Which is going to route users to the closest. You graphic region really becomes advantageous to customers in.
179
00:37:18.360 --> 00:37:39.589
Philip Sellers: I would suspect most customers at the scale, the enterprise customers that really want to take advantage of this Federation, are probably already going to have something in their portfolio to provide it. And if not, I mean, you know, Azure, AWS... there are great GSLB solutions in the cloud providers as well.
180
00:37:39.910 --> 00:37:46.049
Jirah Cox: Yeah, I can picture a way to get there using, like, Active Directory zones and site-aware records,
181
00:37:46.100 --> 00:37:58.690
Jirah Cox: and I can picture that quickly turning into an argument for buying a GSLB as well. There are things you can do, and there are things that you actually want to own as a full solution.
182
00:37:58.790 --> 00:38:09.569
Ben Rogers: I know this is still fairly new for our portfolio, but have we had any clients use this in the sense of differentiating between on-prem and cloud S3 buckets?
183
00:38:10.120 --> 00:38:16.350
Jirah Cox: Good question, Ben. Well, and to be super clear for our listeners, right, objects itself is,
184
00:38:16.490 --> 00:38:35.269
Jirah Cox: going from memory here, 4 and a half, 5 years old. It's actually super mature as an on-prem, Nutanix-owned and managed service. This is version 4 that we're kind of highlighting here, and some of the features. The Federation is the new part, right? But Nutanix as an S3 object data store is actually kind of not so new.
185
00:38:35.270 --> 00:38:50.490
Jirah Cox: Really. But still, amazing question, Ben. So, yeah, for sure. I mean, it's really part of when we say that Nutanix is a cloud platform, right? And I don't say public or private anymore; it's all, of course, hybrid multi-cloud, because it's your cloud that goes wherever you want to run your business, whether that's
186
00:38:50.490 --> 00:39:06.659
Jirah Cox: owned or rented, or your building or colo or partner data center, or the XenTegra data center, or public cloud, or anything. It's your cloud, man. Where do you want your cloud to be? And does your cloud need to be scalable and rented, or is it, you know, pretty static and owned, or all these things?
187
00:39:06.820 --> 00:39:10.570
Jirah Cox: But from a capability standpoint of what your cloud probably needs to have
188
00:39:10.580 --> 00:39:37.659
Jirah Cox: to support even today's apps like backup and tomorrow's apps like cloud-native and containerized applications, S3 as a protocol is sort of something that you probably need as a cloud provider to your business. That's sort of my thought technology for how you need to own and run IT as a company, if you're a customer, right? It's like: I am a cloud provider to the business. That can involve integrating all these vendors, and I can choose to make public cloud or private cloud, or any of that, a component of it.
189
00:39:37.890 --> 00:39:42.630
Jirah Cox: It's like S3 just has to be there, probably, right? Because your developers, if they're developing apps,
190
00:39:42.670 --> 00:40:00.630
Jirah Cox: probably on a long enough timeline, are going to want to see this kind of capability, this kind of storage. Even your other vendors, right? And I've highlighted a few here, at least and probably not limited to: your Rubriks, your Cohesitys, your Commvaults, your Veeams want to speak S3 somewhere. So it just kind of becomes more and more of a,
191
00:40:00.630 --> 00:40:20.179
Jirah Cox: closer and closer to, mandatory offering that you have to have. The same way, like 30 years ago, it was: do you offer DNS? Maybe you said no, right? No, we do, you know, NetBEUI. But eventually you got to a point where you said, yes, we offer DNS. And so now it's: do you offer S3? And that's becoming more and more of a yes everywhere.
192
00:40:22.240 --> 00:40:33.660
Philip Sellers: Yeah, I mean, it definitely seems to be the de facto object store language. I mean, I know Microsoft's got their own version with Azure Blob, but
193
00:40:33.700 --> 00:40:39.100
Philip Sellers: almost everyone talks S3. I mean, yeah, it's
194
00:40:39.520 --> 00:40:54.609
Philip Sellers: it's the most widely adopted. And to your point, I use scale-out backup repositories inside of Veeam, you know, where we were sending data out to an S3 storage provider, and
195
00:40:54.610 --> 00:41:21.910
Philip Sellers: you know, it worked extraordinarily well for offsite immutable backups. And in the latest version, Veeam can do backups directly to S3 storage. So it makes a Nutanix cluster running Nutanix Objects a viable backup target. That opens new avenues without adding additional operational complexity and things like that. You could,
196
00:41:21.980 --> 00:41:29.089
Philip Sellers: you know, do an entire backup-based Nutanix cluster to the side, or, you know, possibly
197
00:41:29.100 --> 00:41:54.579
Jirah Cox: a lower-tier archive along with your backup storage. So I think the opportunities are huge for customers as they look at the landscape, which has changed over the last few years. Yeah, to your point, Philip, and to your question, Ben: why would I look at Nutanix for that? Because, candidly, we're not a household name yet as a backup target, right? Nutanix is not
198
00:41:54.580 --> 00:42:10.149
Jirah Cox: commonly thought of that way. I don't see customers coming to me and giving an unsolicited proposal for a backup cluster. It's a thing we can do, but we're still getting the word out there about that. Why would I want to? Before, we've highlighted the operational, sort of lifecycle, differences, like:
199
00:42:10.150 --> 00:42:39.619
Jirah Cox: running a Nutanix backup cluster is as easy as running any other Nutanix cluster, which is super, super easy, both on day 1 and day 1,500 when you want to refresh it. But also, I see proposals all the time that have competitive financial comparisons against public cloud S3 storage, where we're saving the customer maybe 30 to 50 percent against cloud storage. So better price per pound for these quote-unquote cheap-and-deep workloads. But also no egress charge, right? This can be a cluster living next to your
200
00:42:39.690 --> 00:42:42.940
Jirah Cox: backup appliance or your production cluster.
201
00:42:43.050 --> 00:42:53.429
Jirah Cox: but I can get to that data for free, versus being like, oh, I don't want to do a big restore out of the cloud, because I'm going to get hit with that cloud bill for the cloud egress charge for the month. So a pretty differentiated solution there.
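A back-of-envelope sketch of that egress point. The dollar figures below are hypothetical placeholders, not actual cloud or Nutanix pricing; the point is only that a metered egress fee scales with restore size, while a local restore does not.

```python
# Hypothetical restore-cost comparison: metered cloud egress vs. a local
# cluster with no egress charge. Prices are illustrative placeholders.

def restore_cost(tb_restored: float, egress_per_gb: float) -> float:
    """Cost of pulling a restore across a metered egress link."""
    return tb_restored * 1024 * egress_per_gb  # TB -> GB, times per-GB fee

cloud_bill = restore_cost(50, 0.09)   # 50 TB restore at a made-up $0.09/GB
onprem_bill = restore_cost(50, 0.0)   # same restore from a local cluster

print(f"cloud egress: ${cloud_bill:,.0f}")   # cloud egress: $4,608
print(f"on-prem:      ${onprem_bill:,.0f}")  # on-prem:      $0
```

Even at a modest per-gigabyte rate, a large restore turns into a noticeable one-time bill, which is the asymmetry Jirah is pointing at.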
202
00:42:53.600 --> 00:43:11.269
Philip Sellers: And there are alternatives, or competitors I guess I should say, competitors to AWS, who obviously invented S3, created S3. There are whole businesses that exist trying to solve the same problem of egress charges, you know, as cloud providers.
203
00:43:11.310 --> 00:43:24.200
Philip Sellers: You can do one better by getting it into the same data center, or into other colos for further savings, where you're doing the management instead of paying a cloud provider to manage your S3 storage.
204
00:43:25.270 --> 00:43:33.360
Philip Sellers: Well, guys, I think this was an amazing topic and an amazing session for us today. I know I learned a lot. Ben, what's
205
00:43:33.630 --> 00:43:36.230
Philip Sellers: sticking out for you?
206
00:43:36.270 --> 00:44:00.809
Ben Rogers: Oh, yeah, man. Just, again, the Nutanix Unified Storage systems are a great story for our company. Man, I need to learn more about it, because it's another tool in my arsenal to present to clients. But I appreciate you letting me join today. I definitely learned a lot about the Federation process, and I'm excited to get out and ask my customers some questions and see how this would be useful for them.
207
00:44:00.910 --> 00:44:30.409
Philip Sellers: Yeah, I agree with you. I think this is some of the exciting stuff Nutanix is definitely delivering. So it's great. Jirah, really appreciate you unpacking this and bringing the topic forward for us today. Jirah Cox: My pleasure, man. Thank you for the time, and thank you to my buddy Steve for writing such an excellent blog post as well.
208
00:44:30.490 --> 00:44:49.609
Philip Sellers: On behalf of XenTegra, Jirah, and Ben, I want to say thank you for carving out a little time and listening to our podcast today. We will certainly be back with more, and we hope you'll join us for a future podcast episode. Until then, I'm Phil Sellers for XenTegra, and we'll catch you next time.