WEBVTT 1 00:09:15.389 --> 00:09:21.089 Yeah, I've tried that to her. 2 00:11:21.089 --> 00:11:21.208 Okay, 3 00:11:21.234 --> 00:12:24.293 Eva. 4 00:12:26.908 --> 00:12:30.989 Okay. 5 00:12:30.989 --> 00:12:37.288 Beautiful. Thank you. So today I get to shut up and you guys get to talk. 6 00:12:37.288 --> 00:12:42.989 So, I've got a list on the web page you can look at. 7 00:12:42.989 --> 00:12:53.729 I just listed people in the order that you replied, so, to your point earlier, you got first choice, but you also got, 8 00:12:53.729 --> 00:12:57.089 um, to speak first, so I don't know — 9 00:12:57.089 --> 00:13:00.208 8, 9 minutes each; I'm not going to enforce it rigidly. 10 00:13:00.208 --> 00:13:04.948 If we run over, then we can actually continue on Thursday. 11 00:13:04.948 --> 00:13:13.078 But before I — before, um, 12 00:13:13.078 --> 00:13:16.379 you start talking: first, for Thursday. 13 00:13:16.379 --> 00:13:23.578 Totally coincidentally, there is a very interesting quantum computing talk. There's 14 00:13:23.578 --> 00:13:28.918 an announcement, and you can look at it on the blog for the course. 15 00:13:28.918 --> 00:13:42.149 There are several main quantum computing technologies — I'll get to them later. There are qubits using Josephson junctions, there's something called trapped ions, and then there's quantum annealing. 16 00:13:42.149 --> 00:13:46.349 So, one of the leaders of the trapped-ion 17 00:13:46.349 --> 00:13:54.089 technology is speaking on Thursday — a Zoom meeting, of course — starting at 11:30 New York time. 18 00:13:54.089 --> 00:13:59.818 It's run by some outside group, so I put the details on the blog. 19 00:13:59.818 --> 00:14:08.278 Um, you'd have to register in advance, and you have to register by the day before, so I can't make this an official course requirement. 20 00:14:08.278 --> 00:14:20.938 However — because it's not completely in class time, it's an hour earlier — I strongly recommend that, if you're free, you register and you watch it. The title is Quantum Computing with Atoms, 21 00:14:20.938 --> 00:14:25.528 and it's by the founder of IonQ, who's a professor at Duke. 22 00:14:26.759 --> 00:14:32.458 Yeah, tacked on: also on the blog I put up a homework for 23 00:14:32.458 --> 00:14:40.499 next week. But okay, so now we'll just go in order — you can check the blog: 24 00:14:40.499 --> 00:14:44.129 [name unclear], Justin, Dan, Joseph, Jack, 25 00:14:44.129 --> 00:14:47.788 Isaac, Blaine, Mark, Ben, and maybe Connor. 26 00:14:47.788 --> 00:14:52.708 So, first up, talking about Python — 27 00:14:52.708 --> 00:14:56.489 you have the floor if you're here. 28 00:14:56.489 --> 00:14:59.938 My slides are now — 29 00:15:02.399 --> 00:15:05.428 your screen — yeah, so just something quick. 30 00:15:05.428 --> 00:15:09.808 Yes, so let me start with parallel processing in Python. So, 31 00:15:09.808 --> 00:15:22.288 Python is a language designed for making programming easier, so basically all the APIs are very high level, and here in the first slide I'm introducing a 32 00:15:22.288 --> 00:15:23.423 parallel API 33 00:15:23.453 --> 00:15:29.063 called threading. In Python this might be the lowest level, 34 00:15:29.303 --> 00:15:33.894 but in the official documentation it's described as higher level, 35 00:15:34.403 --> 00:15:39.053 because it's slightly higher level than the old thread module. 36 00:15:41.933 --> 00:15:54.504 The thread package in Python is no longer recommended in Python 3, so, um, threading is basically the lowest-level package for Python.
37 00:15:55.403 --> 00:16:03.624 So this is the package that uses the Thread class to control multithreaded processing, just like, 38 00:16:03.624 --> 00:16:03.984 um, 39 00:16:04.014 --> 00:16:04.254 p- 40 00:16:04.254 --> 00:16:05.094 threads in C. 41 00:16:07.073 --> 00:16:07.913 The class supports, 42 00:16:07.913 --> 00:16:08.183 like, 43 00:16:08.183 --> 00:16:10.104 four major functions: start, 44 00:16:10.134 --> 00:16:10.583 run, 45 00:16:10.854 --> 00:16:12.144 join, and getting 46 00:16:12.984 --> 00:16:18.774 the name of the thread, and on the right is sample code for the usage of this threading package. 47 00:16:19.073 --> 00:16:31.734 So here we can see we can create a thread object, giving it a function and an ID, and it's going to step into that function. It's very similar to 48 00:16:34.254 --> 00:16:48.774 pthreads, which I think someone is going to introduce later. And we can also use start to start running the thread with the assigned function, and we can 49 00:16:48.803 --> 00:16:54.024 join all the threads back into the main function, 50 00:16:54.024 --> 00:16:56.214 the main thread, after we have the result. 51 00:16:57.533 --> 00:16:57.833 Yep, 52 00:16:58.193 --> 00:17:02.063 and so another very popular package for, 53 00:17:02.244 --> 00:17:02.933 um, 54 00:17:03.114 --> 00:17:07.134 for parallel processing is called multiprocessing, and this one 55 00:17:07.523 --> 00:17:07.824 um, 56 00:17:08.753 --> 00:17:10.163 provides very high level and convenient, 57 00:17:10.403 --> 00:17:10.644 uh, 58 00:17:10.673 --> 00:17:11.364 programming, 59 00:17:11.574 --> 00:17:11.993 um, 60 00:17:12.023 --> 00:17:16.973 to do all the, like, multiprocessing stuff. 61 00:17:17.814 --> 00:17:18.324 So, 62 00:17:19.374 --> 00:17:19.614 like, 63 00:17:19.644 --> 00:17:22.104 a difference between this multiprocessing and 64 00:17:22.104 --> 00:17:25.432 the previous threading package is 65 00:17:25.432 --> 00:17:25.703 the 66 00:17:25.703 --> 00:17:28.013 Pool object. So the Pool 67 00:17:28.013 --> 00:17:40.733 object can parallelize the execution of a function across multiple inputs, and can also provide a convenient way of distributing the data across processes. And so at the top 68 00:17:40.733 --> 00:17:53.723 right here is sample code for the Pool object: basically we can create a Pool object and use the map function to run the assigned function across this iterable. 69 00:17:54.118 --> 00:18:02.729 So we just throw the iterable into the Pool, and the Pool will take care of everything. 70 00:18:02.729 --> 00:18:09.628 Here on the lower right is sample code for the Process — 71 00:18:09.628 --> 00:18:19.138 the Process object. So I think it's very similar to the previous threading: we can give it the function and arguments. 72 00:18:19.138 --> 00:18:25.169 So this is basically the same. And also, another part is the process ID. 73 00:18:25.169 --> 00:18:39.568 You see, we can use, like, the os module to get low-level information such as the ID of every process and thread. 74 00:18:41.394 --> 00:18:55.193 And the next part is the communication between each process. So pipe — pipe is covered in C — and the Queue is new for me, because I never used it before. 75 00:18:55.554 --> 00:18:56.723 So this Queue — 76 00:18:57.749 --> 00:19:03.114 on the official documentation, it describes the Queue object as thread and process safe. 77 00:19:03.683 --> 00:19:14.364 I couldn't find a specific explanation to show why and how this Queue is thread and process safe.
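As a rough illustration of the Thread and Pool usage described above — a minimal sketch, not the presenter's slide code; the work function and the counts are made up:

    # threading.Thread (start/join) and multiprocessing.Pool (map) in one small example.
    import threading
    from multiprocessing import Pool

    def work(task_id):
        # placeholder workload: just square the task id
        return task_id * task_id

    if __name__ == "__main__":
        # threading.Thread: create the threads, start() them, then join() them back
        threads = [threading.Thread(target=work, args=(i,)) for i in range(4)]
        for t in threads:
            t.start()   # start() launches the thread, which calls run() internally
        for t in threads:
            t.join()    # join() waits for the thread to finish before continuing

        # multiprocessing.Pool: map the same function across an iterable of inputs
        with Pool(processes=4) as pool:
            results = pool.map(work, range(10))
        print(results)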
78 00:19:16.584 --> 00:19:29.963 The usage is: just create a Queue object, and you can just pass it into a process. And in the process you can, like, write data to the queue, and also, after everything is done, you can, like, 79 00:19:30.808 --> 00:19:34.199 just get data from the queue. 80 00:19:34.794 --> 00:19:38.394 And the Pipe is very similar to a pipe, 81 00:19:38.423 --> 00:19:38.693 uh, 82 00:19:38.693 --> 00:19:39.443 in C, 83 00:19:39.773 --> 00:19:51.114 and it has two ends, and there's a risk of corruption if two processes 84 00:19:51.114 --> 00:19:54.294 try to read or write to the same end of the pipe at the same time. 85 00:19:54.834 --> 00:19:55.284 So, 86 00:19:55.284 --> 00:19:55.763 um, 87 00:19:55.794 --> 00:19:56.213 yeah, 88 00:19:56.273 --> 00:20:05.034 that's pipe communication. Then synchronization is similar to what we have in C: 89 00:20:05.064 --> 00:20:05.513 obviously, 90 00:20:05.513 --> 00:20:11.453 we use a lock to make sure that there's only one thread running in the critical section at the same time, 91 00:20:11.453 --> 00:20:18.743 so we can make sure the output is in the correct order, so we won't face any, like, synchronization error. 92 00:20:20.003 --> 00:20:23.394 And for shared memory in Python, 93 00:20:23.574 --> 00:20:25.913 in the multiprocessing package, 94 00:20:25.943 --> 00:20:36.294 we can use two embedded special structures called Value and Array to implement shared-memory functionality. 95 00:20:36.534 --> 00:20:44.693 For example, in this example, we create a shared-memory Value here and an Array here, and 96 00:20:45.028 --> 00:20:53.038 in each thread we just assign the value and write our values into the array. 97 00:20:53.038 --> 00:21:00.148 So there's the output; it shows that the function — the parallel function — is running correctly. 98 00:21:02.334 --> 00:21:02.574 Yeah, 99 00:21:02.993 --> 00:21:15.594 and another important thing is machine learning and deep learning, because Python now is pretty much the main language for machine learning and deep learning, and there are two major packages for machine learning, 100 00:21:15.594 --> 00:21:17.364 which are PyTorch and TensorFlow. 101 00:21:17.669 --> 00:21:22.949 So the machine learning part is a bit out of scope, but it's important for Python, 102 00:21:23.903 --> 00:21:37.854 so I mention it here. So people use CUDA to accelerate the calculation, and basically in deep learning 103 00:21:37.884 --> 00:21:49.644 we use the GPU because a GPU is good at one kind of data calculation and a CPU at another — 104 00:21:49.763 --> 00:22:01.074 I don't remember exactly which data structure is which — so basically the framework acts as the manager, and all the APIs here are pretty high level. 105 00:22:01.074 --> 00:22:02.993 So you can write the code quite easily: 106 00:22:03.233 --> 00:22:08.663 you can get a device if you have a CUDA machine, 107 00:22:08.993 --> 00:22:10.433 and you can just, basically, 108 00:22:11.124 --> 00:22:23.394 send the data to that device, and this call will automatically convert the data type to a tensor and send it onto the GPU. 109 00:22:23.394 --> 00:22:33.894 Behind the scenes it's all CUDA — CUDA is a parallel computing package used to utilize the GPU to do all the calculation. 110 00:22:34.229 --> 00:22:38.548 That's everything I have. Any questions? 111 00:22:42.179 --> 00:22:48.838 Yeah, thank you very much. If you're willing to make your slides available, I'll put them up on the website, on the blog.
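A minimal sketch of the interprocess-communication and shared-memory pieces just described — not the slide code; the producer function and the sizes are made up:

    # multiprocessing Queue for passing results back, plus shared Value/Array with a Lock.
    from multiprocessing import Process, Queue, Value, Array, Lock

    def producer(q, counter, arr, lock, idx):
        q.put((idx, idx * 10))      # the Queue is process- and thread-safe
        with lock:                  # the Lock keeps the shared updates orderly
            counter.value += 1      # shared scalar (Value)
            arr[idx] = idx * idx    # shared array slot (Array)

    if __name__ == "__main__":
        q, lock = Queue(), Lock()
        counter = Value('i', 0)         # shared int
        arr = Array('i', [0] * 4)       # shared int array of length 4
        procs = [Process(target=producer, args=(q, counter, arr, lock, i)) for i in range(4)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        print([q.get() for _ in range(4)], counter.value, arr[:])

And the device transfer the presenter describes for PyTorch might look roughly like this (assuming torch is installed; it falls back to the CPU if no CUDA device is available):

    import torch

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    x = torch.randn(1000, 1000)   # the data starts out on the CPU
    x = x.to(device)              # .to(device) (or .cuda()) moves the tensor to the GPU
    y = x @ x                     # the matrix multiply now runs on the GPU
    y = y.cpu()                   # bring the result back to host memory if needed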
112 00:22:48.838 --> 00:22:54.659 So, I don't know — if no one else has a question, 113 00:22:54.659 --> 00:23:01.048 then I'll give you one. So, do you think using the GPU with 114 00:23:01.048 --> 00:23:15.443 Python is easy in practice, or is it in practice very difficult? Well, most of my experience is PyTorch machine learning; at least using the GPU with these libraries, 115 00:23:15.443 --> 00:23:16.314 it's very easy. 116 00:23:16.709 --> 00:23:22.709 It's just, basically, like, several extra lines of code to send to the GPU, or 117 00:23:22.709 --> 00:23:26.398 to send your data to it. It's very easy. 118 00:23:26.398 --> 00:23:29.788 Thank you. Anyone else have a question? 119 00:23:30.898 --> 00:23:36.028 Okay, great. Okay. Our next 120 00:23:36.028 --> 00:23:40.828 chance to learn is from Justin, who'll teach us about SETI and astronomy 121 00:23:40.828 --> 00:23:47.308 on parallel computers. — Give me a second, let's see here. 122 00:23:47.308 --> 00:23:51.419 Let's see. 123 00:23:53.999 --> 00:23:57.959 Oh. 124 00:24:00.778 --> 00:24:04.588 Give me one sec, I haven't done this before. 125 00:24:11.699 --> 00:24:16.769 Here we go. Okay. 126 00:24:20.729 --> 00:24:23.939 Can you all see this? All right. 127 00:24:25.439 --> 00:24:37.169 All right, so, yeah, I chose parallel computing with astronomy, so I'm going to be focusing on SETI, which is the search for 128 00:24:37.169 --> 00:24:43.019 extraterrestrial intelligence. So quickly, just some background history. 129 00:24:43.019 --> 00:24:46.378 Most SETI projects are based on 130 00:24:46.378 --> 00:24:54.388 looking for radio transmissions. Uh, so the first, uh, more modern attempt to detect interstellar radio transmissions was called 131 00:24:54.388 --> 00:24:57.929 Project Ozma, and it took place in the 1960s. 132 00:24:57.929 --> 00:25:05.159 It was conducted by Frank Drake at the NRAO, which is the National Radio Astronomy Observatory 133 00:25:05.159 --> 00:25:10.169 in West Virginia, using an 85-foot radio telescope. 134 00:25:10.169 --> 00:25:17.368 And it focused on 1420 megahertz, which is the same thing as the 21-centimeter wavelength. 135 00:25:17.368 --> 00:25:20.848 And they chose this because, in astronomy, this is like the 136 00:25:20.848 --> 00:25:26.939 wavelength that the lowest-energy hydrogen atoms 137 00:25:26.939 --> 00:25:31.739 put out, and possibly because it's the most common. So if you're looking for 138 00:25:31.739 --> 00:25:36.419 extraterrestrial life, you'd think that you should look at the most common places, or the most abundant, 139 00:25:36.419 --> 00:25:40.888 like, kind of, gas areas. And of course 140 00:25:40.888 --> 00:25:52.618 they found nothing, otherwise you would have heard about it. So, since this was kind of the first modern attempt, it kind of inspired other countries, such as the Soviets, to search with omnidirectional antennas, 141 00:25:52.618 --> 00:25:56.878 and they hoped to pick up stronger radio signals. 142 00:25:56.878 --> 00:26:01.618 And then NASA also started funding a number of different SETI projects after this as well. 143 00:26:01.618 --> 00:26:04.858 And just to be a little more specific on 144 00:26:04.858 --> 00:26:08.249 what this radio telescope really pointed at — it focused on 145 00:26:08.249 --> 00:26:15.388 two stars called Epsilon Eridani and Tau Ceti, which are both around 11 light-years away from us. 146 00:26:15.388 --> 00:26:21.479 So the next major project was called Project Phoenix.
147 00:26:21.479 --> 00:26:27.568 And it was the most sensitive and comprehensive search for extraterrestrial intelligence, using the 148 00:26:27.568 --> 00:26:32.368 uh, NRAO telescope and, at the later end of the project, using the 149 00:26:32.368 --> 00:26:39.058 Arecibo telescope in Puerto Rico, which actually recently collapsed due to structural failure. 150 00:26:39.058 --> 00:26:43.318 So this project observed 100 sun-like stars 151 00:26:43.318 --> 00:26:49.739 within 200 light-years, focusing on the 1200 to 3000 megahertz frequencies. 152 00:26:49.739 --> 00:26:54.628 So, again, when I say sun-like stars, that means 153 00:26:54.628 --> 00:26:59.009 stars, I guess, that may have habitable zones with planets 154 00:26:59.009 --> 00:27:04.439 that could actually have life on them. And when the project first started out, 155 00:27:04.439 --> 00:27:08.429 it was based in West Virginia, using a 140-foot 156 00:27:08.429 --> 00:27:15.328 NRAO telescope, and because that telescope had other projects as well, it could only be used about 50%, 157 00:27:15.328 --> 00:27:18.358 50% of the time, for SETI research. 158 00:27:18.358 --> 00:27:21.659 And then when it moved to Puerto Rico, 159 00:27:21.659 --> 00:27:26.249 it actually got even less time, because that, uh, satellite — uh, 160 00:27:26.249 --> 00:27:29.939 not satellite, but telescope — the 161 00:27:29.939 --> 00:27:33.088 telescope was very popular, so 162 00:27:33.088 --> 00:27:37.348 SETI research could only be conducted in two 4-week sessions, 163 00:27:37.348 --> 00:27:40.679 which ends up being, like, around 100 days, pretty much. 164 00:27:43.078 --> 00:27:47.909 And then the Allen Telescope Array was developed by the SETI Institute 165 00:27:47.909 --> 00:27:53.578 and the Berkeley SETI Research Center, and it was really developed specifically for SETI searches. 166 00:27:53.578 --> 00:28:00.449 So its kind of setup was using a bunch of small antennas instead of a big one like Arecibo, 167 00:28:00.449 --> 00:28:03.479 and currently it has 42 small antennas. 168 00:28:03.479 --> 00:28:06.689 The initial goal was to build around 350, 169 00:28:06.689 --> 00:28:13.318 but that's really expensive, and they started off with 42 but then didn't add any more, because of loss of funding 170 00:28:13.318 --> 00:28:18.419 and also because they wanted to see how those 42 would perform in their SETI searches. 171 00:28:18.419 --> 00:28:25.138 And again, it focuses on a bit wider frequency range, 1000 to 15000 megahertz, 172 00:28:25.138 --> 00:28:28.798 and it surveys tens of thousands of red dwarf stars 173 00:28:28.798 --> 00:28:32.788 and, of course, the newly discovered exoplanets in habitable zones. 174 00:28:32.788 --> 00:28:39.509 So the better thing about the Telescope Array was that it was specifically for SETI. So, instead of 175 00:28:39.509 --> 00:28:47.098 SETI research being done only maybe a third of the time each year, this could be used 7 days a week, 176 00:28:47.098 --> 00:28:55.648 uh, like, 24/7. So, now getting to some of how parallel computing helps with this research: 177 00:28:55.648 --> 00:28:59.909 initial SETI projects used special supercomputers 178 00:28:59.909 --> 00:29:03.328 at the location of the telescope to process the 179 00:29:03.328 --> 00:29:09.179 data, but then SETI@home, which got released in 1999, 180 00:29:09.179 --> 00:29:15.778 uses a virtual supercomputer, which is a bunch of network-connected computers. 181 00:29:16.949 --> 00:29:23.759 So, if you guys read the L
LNL article we saw on the first day of class, we saw SETI get mentioned 182 00:29:23.759 --> 00:29:28.558 under the distributed computing section. So I'm going to cover it 183 00:29:28.558 --> 00:29:31.709 again — again, it's multiple computers, 184 00:29:31.709 --> 00:29:36.148 each with their own multiple processors and, uh, 185 00:29:36.148 --> 00:29:43.469 distributed memory, connected by the network. So what SETI@home could do is, if we wanted to volunteer our computers, 186 00:29:43.469 --> 00:29:48.898 they could connect them all together and use them to process the big chunks of data that came into the telescope. 187 00:29:48.898 --> 00:29:54.719 So, instead of the bottom-right image, which is the uniform 188 00:29:54.719 --> 00:30:03.088 memory access setup, I believe, the network at a very high level would look like the other two diagrams, with network-connected 189 00:30:03.088 --> 00:30:06.838 nodes and memory. 190 00:30:06.838 --> 00:30:14.368 So, again, with SETI@home, I mentioned that it's volunteer computing, so we download software so that 191 00:30:14.368 --> 00:30:19.019 SETI and its parent, or managing, 192 00:30:19.019 --> 00:30:26.939 software, called BOINC — "boink" is how it's pronounced — can use our computers to process their data. 193 00:30:26.939 --> 00:30:32.818 So the data analysis is definitely broken up into smaller pieces, because 194 00:30:32.818 --> 00:30:37.679 data from the telescope came in 35-gigabyte chunks, 195 00:30:37.679 --> 00:30:42.659 which is pretty big. Well, I guess nowadays that's not that big, but 196 00:30:42.659 --> 00:30:45.959 for them it's pretty big. And 197 00:30:45.959 --> 00:30:50.669 the good thing about this network computer was that it could provide 600-plus teraflops 198 00:30:50.669 --> 00:30:56.489 of computing power — and FLOPS, if you don't know, stands for floating-point operations per second. 199 00:30:57.719 --> 00:31:03.509 And the image on the right here is just, like, what the SETI@home software would look like when it's running on your computer. 200 00:31:03.509 --> 00:31:10.259 So, typically, it would consume your entire computer, sort of coming on as the screensaver when you're not using it, 201 00:31:10.259 --> 00:31:13.499 or just run in the background, to make use of the processor time 202 00:31:13.499 --> 00:31:16.858 that would otherwise be unused. 203 00:31:16.858 --> 00:31:22.769 So, more specifically about the data: again, I said the telescope data came in 35-gigabyte chunks, 204 00:31:22.769 --> 00:31:28.409 and it had to be broken down into much smaller chunks — 0.25 megabytes, to be exact — 205 00:31:28.409 --> 00:31:36.239 because our computers can't handle that much data, and also because the Internet at the telescope was pretty slow. 206 00:31:36.239 --> 00:31:41.068 So, to transfer 35 gigabytes back to the Berkeley institute, where it gets 207 00:31:41.068 --> 00:31:46.318 processed and then distributed to us, they had to break down the chunks to be sent 208 00:31:46.318 --> 00:31:51.269 little piece by piece. So the average home computer, 209 00:31:51.269 --> 00:31:55.348 according to the article I read, which was from, like, maybe 5 to 10 years ago, 210 00:31:55.348 --> 00:31:58.378 took about 30 hours to process one work unit. 211 00:31:58.378 --> 00:32:03.358 Again, the computer's not dedicating all of its resources to
212 00:32:03.358 --> 00:32:08.459 SETI, which is probably why it takes much longer than if it dedicated all of its resources. 213 00:32:08.459 --> 00:32:17.699 So even with that timeframe, and about 140,000 work units, that adds up to about 4.2 million hours of computation. 214 00:32:17.699 --> 00:32:20.788 So, again, with the 600 215 00:32:20.788 --> 00:32:28.618 teraflops I said before, the work for 35 gigabytes of data could typically be done in 216 00:32:28.618 --> 00:32:34.858 a little more than a day. And just, lastly, to wrap up: I mentioned 217 00:32:34.858 --> 00:32:38.278 BOINC, which managed the SETI@home project. 218 00:32:38.278 --> 00:32:42.028 Uh, unfortunately, SETI@home stopped in March 2020, 219 00:32:42.028 --> 00:32:49.048 but the Berkeley Open Infrastructure for Network Computing does still have similar projects that use network- 220 00:32:49.048 --> 00:32:52.259 connected computing, or a virtual supercomputer, 221 00:32:52.259 --> 00:32:59.578 and it has 31 similar active projects that relate to a lot of topics like math, astronomy, 222 00:32:59.578 --> 00:33:05.009 and even biology. So, uh, that's it on my end. 223 00:33:05.009 --> 00:33:09.298 Any questions? 224 00:33:09.298 --> 00:33:13.409 Thank you. Questions, anyone? 225 00:33:13.409 --> 00:33:20.969 You know, over in physics at RPI they're running one of those projects, the MilkyWay one, to compute — 226 00:33:20.969 --> 00:33:24.598 analyze the Milky Way. Oh, really? 227 00:33:24.598 --> 00:33:28.288 I think that's the one, yeah. So — 228 00:33:28.288 --> 00:33:34.229 okay. Last I heard, they haven't found anything yet — nothing that they're willing to talk about, 229 00:33:34.229 --> 00:33:42.148 at least. Okay, thank you. So, Dan, teach us physics. 230 00:33:42.148 --> 00:33:48.209 Hello. I don't have a presentation, but I'll turn my video on so we can 231 00:33:48.209 --> 00:33:54.209 interact. Okay, so, 232 00:33:54.209 --> 00:33:58.318 I'll talk about parallel computing in physics. 233 00:33:59.459 --> 00:34:03.028 I kind of did a little physics in my undergrad, but, uh, 234 00:34:03.028 --> 00:34:09.989 researching it, I found a lot of mention of it in high-energy physics, which is 235 00:34:11.039 --> 00:34:14.849 really small scale, so, atomic level, but also, 236 00:34:14.849 --> 00:34:24.239 like, universal scale, so, like, kind of going into the astronomy part — astrophysics, formation of galaxies and solar systems. 237 00:34:24.239 --> 00:34:27.929 Um, and a big part 238 00:34:27.929 --> 00:34:31.829 is simulations, so running 239 00:34:31.829 --> 00:34:39.509 how galaxies form, how solar systems form, different atomic things. So 240 00:34:39.509 --> 00:34:43.139 kind of the first thing I read a paper on was an 241 00:34:43.139 --> 00:34:48.539 experiment run at CERN, for the large — 242 00:34:48.539 --> 00:34:59.128 uh, it was called the NA48 experiment, which was a 3-year experiment, kind of simulating the 243 00:34:59.128 --> 00:35:02.818 sensors in the detector. So 244 00:35:02.818 --> 00:35:09.088 they were explaining what parts of the system they needed to parallelize. 245 00:35:09.088 --> 00:35:14.248 So, one of the first things they wanted to do — they categorized 246 00:35:14.248 --> 00:35:19.079 each kind of particle collision as an event. 247 00:35:19.079 --> 00:35:26.938 So the system is made of sensors, and as the particle passes through each sensor, they wanted to parallelize 248 00:35:26.938 --> 00:35:31.079 the data collection from each sensor, so
249 00:35:31.079 --> 00:35:35.639 they would not throttle any of the data collection, 250 00:35:35.639 --> 00:35:46.168 as well as storage, because they were storing, like, terabytes of information as they're running the simulation. So they wanted to parallelize any 251 00:35:46.168 --> 00:35:49.619 reading and writing from the disks 252 00:35:49.619 --> 00:35:54.179 in the system. Then, kind of — 253 00:35:54.179 --> 00:35:58.259 it was a very brief paper and very complicated, so it's kind of 254 00:35:58.259 --> 00:36:08.369 a little difficult to understand if you weren't one of the engineers. So, then moving on, another big part of physics is 255 00:36:08.369 --> 00:36:18.119 engineering physics, so kind of materials science and fluids — fluid dynamics. So to get a more accurate simulation, you want a smaller 256 00:36:18.119 --> 00:36:23.579 scale — the closer you can get to, like, atomic size, the better. So, 257 00:36:23.579 --> 00:36:26.818 obviously, two dimensions 258 00:36:26.818 --> 00:36:30.148 won't be as 259 00:36:30.148 --> 00:36:41.429 CPU-intensive or GPU-intensive, but as the scale decreases, or you increase dimensions, you want to parallelize that. So, every time step 260 00:36:41.429 --> 00:36:48.568 you parallelize doing the solutions on each cell 261 00:36:48.568 --> 00:36:54.778 and then increment the timestep. So parallelizing that is a huge advantage. 262 00:36:54.778 --> 00:37:01.259 It's used in modeling fuel flow through fuel lines of, like, 263 00:37:01.259 --> 00:37:05.938 spaceships, jet engines — modeling 264 00:37:05.938 --> 00:37:11.429 fluid flow through a jet engine, so the air, temperature, everything. 265 00:37:11.429 --> 00:37:16.498 And then, kind of in the materials science aspect, 266 00:37:16.498 --> 00:37:20.489 if you have some structure, 267 00:37:22.079 --> 00:37:25.498 like, a model of some material, 268 00:37:25.498 --> 00:37:29.099 and you want to simulate its behavior 269 00:37:29.099 --> 00:37:32.818 under some force or something, you 270 00:37:34.259 --> 00:37:38.668 want to parallelize each 271 00:37:38.668 --> 00:37:45.119 piece of it so that when it — got a bump there, okay. 272 00:37:47.518 --> 00:37:50.998 Yeah, so then thermodynamics, also a big thing: 273 00:37:50.998 --> 00:37:55.918 modeling temperature transfer through a material, so 274 00:37:55.918 --> 00:38:03.059 modeling its properties, and then the flow of temperature through the material. 275 00:38:03.059 --> 00:38:11.639 And going into astronomy — Justin was talking about a kind of different part of it — astrophysics. So 276 00:38:11.639 --> 00:38:14.730 physicists want to know how, 277 00:38:14.730 --> 00:38:20.820 you know, the early universe was, how planets formed, how solar systems formed, how — 278 00:38:20.820 --> 00:38:30.179 so they simulate multi-body interactions, really expensive things, simulating 279 00:38:30.179 --> 00:38:37.139 thousands of particles under gravitational force. It's really nice if you can 280 00:38:37.139 --> 00:38:43.320 parallelize the forces on each of the bodies, all the interactions, 281 00:38:43.320 --> 00:38:48.059 and it hugely reduces the time to 282 00:38:48.059 --> 00:38:51.480 get from start to end in the simulation. 283 00:38:51.480 --> 00:38:54.840 Oh, yeah. 284 00:38:54.840 --> 00:38:58.349 Any questions? 285 00:39:01.050 --> 00:39:05.070 I'll start with one: what are the hardest problems 286 00:39:05.070 --> 00:39:10.079 in using parallel computers, and the big limitations that they find? 287 00:39:10.079 --> 00:39:17.039 Um, I feel,
288 00:39:17.039 --> 00:39:22.829 for big experiments, it's the sheer scale of the actual experiment. 289 00:39:22.829 --> 00:39:28.619 So, it's not that there's one programmer working on the system — you have such, 290 00:39:28.619 --> 00:39:31.949 it's such a big system with so many computers. 291 00:39:31.949 --> 00:39:38.010 Managing the parallelism of the entire system seemed to be a really big problem. 292 00:39:38.010 --> 00:39:49.500 They were saying, you know, you have teams of programmers working to make this software, and there would be updates to the software every few months to make the 293 00:39:49.500 --> 00:39:55.139 simulation run better, and that was the one that seemed to be a big problem. 294 00:39:56.880 --> 00:40:00.119 Oh, thank you. Yep. Anyone else have any questions? 295 00:40:00.119 --> 00:40:04.619 No? Okay. 296 00:40:04.619 --> 00:40:07.829 Great, thanks. 297 00:40:07.829 --> 00:40:12.179 Yeah, welcome, Joseph — how about you teach us something about threads? 298 00:40:22.500 --> 00:40:28.139 No? Not online. Well, we'll — 299 00:40:30.239 --> 00:40:34.469 let's see. 300 00:40:37.079 --> 00:40:46.949 Oh — oh, Joseph isn't here yet? Um. 301 00:40:48.329 --> 00:40:58.829 Jack, are you here, Jack? Then you can talk about Folding@home, and we'll get back to Joseph later. Yes? 302 00:40:58.829 --> 00:41:07.800 Um, I also didn't prepare any slides for this, so I will turn my camera on as well, 303 00:41:07.800 --> 00:41:11.730 um, and then I guess I will share my screen. 304 00:41:18.719 --> 00:41:23.460 Come on — can you guys see it? 305 00:41:25.469 --> 00:41:30.989 I can see it. Cool. So I took the time to look at a — 306 00:41:30.989 --> 00:41:35.820 similar to what Justin said — a distributed-computing-over-the-Internet project. 307 00:41:35.820 --> 00:41:41.849 This one's called Folding@home. So the name comes from — well, let's start with the project. 308 00:41:41.849 --> 00:41:47.699 The goal of the project is to simulate protein dynamics that happen in cells. 309 00:41:47.699 --> 00:41:53.159 And this is useful for researching diseases, 310 00:41:53.159 --> 00:41:59.070 viruses, and things like that, that interact with proteins within the body. 311 00:41:59.070 --> 00:42:02.369 Um, so, 312 00:42:02.369 --> 00:42:08.789 compared to a typical supercomputer — a protein folding simulation obviously is a very complex 313 00:42:08.789 --> 00:42:14.610 computation and would take very long to compute on a typical supercomputer. 314 00:42:14.610 --> 00:42:18.630 So, being a distributed 315 00:42:18.630 --> 00:42:28.079 project, what this Folding@home does is send out different tasks to different computers all over the world 316 00:42:28.079 --> 00:42:32.250 through the Internet. Those computers will then crunch numbers 317 00:42:32.250 --> 00:42:37.800 and send the data back to the researchers working on the project, where they can analyze it and 318 00:42:37.800 --> 00:42:41.909 do other things with the data. Um, 319 00:42:41.909 --> 00:42:45.449 yeah, that's how this project really 320 00:42:45.449 --> 00:42:48.599 gets its potency in parallelization, where, 321 00:42:48.599 --> 00:42:53.460 instead of being computed all in one place, everything is distributed over many computers. 322 00:42:53.460 --> 00:42:59.699 So this is also a volunteer project: you can download their client and donate 323 00:42:59.699 --> 00:43:05.610 your spare compute resources to this project, if you have capable systems.
324 00:43:05.610 --> 00:43:11.909 I ran it a little bit last year, in the midst of the real height of the pandemic, and, 325 00:43:11.909 --> 00:43:17.849 just to give you guys an idea, my system has a 2070, and to 326 00:43:17.849 --> 00:43:22.380 compute one of these jobs on average took around 3 to 4 hours of 327 00:43:22.380 --> 00:43:26.400 100% utilization. 328 00:43:26.400 --> 00:43:30.719 So, this project has been going on for a while, but 329 00:43:30.719 --> 00:43:33.989 this past March 2020, last year, 330 00:43:33.989 --> 00:43:37.739 it really gained some traction with the COVID-19 outbreak, 331 00:43:37.739 --> 00:43:42.539 and there was a significant number of volunteers who signed up 332 00:43:42.539 --> 00:43:47.070 and started folding, as it's called, with their own systems. 333 00:43:47.070 --> 00:43:50.400 So, for those of you who are researching the 334 00:43:50.400 --> 00:43:54.900 top 500 supercomputers — this network technically is faster, 335 00:43:54.900 --> 00:43:58.769 with speeds of 2.5 exaflops at the 336 00:43:58.769 --> 00:44:02.219 peak of usage last year. 337 00:44:02.219 --> 00:44:09.119 Um, so what I find interesting is an issue that they ran into with all of the 338 00:44:09.119 --> 00:44:17.820 increase in users on this system: there weren't enough servers to distribute the jobs around. It's not that there weren't enough 339 00:44:17.820 --> 00:44:21.329 compute resources to do all the tasks; it's the 340 00:44:21.329 --> 00:44:25.829 sending and receiving of data, and really the parallelization, that 341 00:44:25.829 --> 00:44:30.510 became the bottleneck for this system. 342 00:44:30.510 --> 00:44:36.360 So, yeah, if anyone's interested in this, you can check out their website. I'm on it right now. 343 00:44:37.380 --> 00:44:41.969 There's a tab for COVID-19 where you can read about how it's 344 00:44:43.409 --> 00:44:48.360 simulating protein dynamics for the COVID-19 virus. 345 00:44:48.360 --> 00:44:52.079 And, um, yeah, currently there is a — 346 00:44:57.570 --> 00:45:00.989 apparently, there is a moonshot sprint that started 347 00:45:00.989 --> 00:45:05.760 this January, and they're trying to reach a target goal and 348 00:45:05.760 --> 00:45:11.760 crunch X amount of numbers. So if you guys are interested, check out their website. 349 00:45:11.760 --> 00:45:15.780 And, yeah, if anyone has any questions, 350 00:45:15.780 --> 00:45:23.130 I can answer them. 351 00:45:23.130 --> 00:45:30.059 So — well, I'll start with a question then. So, I'm a really ignorant person in biology. So what 352 00:45:30.059 --> 00:45:33.090 does protein folding mean, and why is it hard? 353 00:45:33.090 --> 00:45:36.269 Um, so, 354 00:45:36.269 --> 00:45:44.670 protein folding is how different proteins will take different shapes as they encounter different molecules in the body. 355 00:45:44.670 --> 00:45:51.329 Um, and this is very hard to simulate because it needs to be done on an atomic level, 356 00:45:51.329 --> 00:45:56.309 and when we get to that low a level, there are very many things to keep track of, 357 00:45:56.309 --> 00:46:01.500 which is why a project like this caters to parallelization really well, 358 00:46:01.500 --> 00:46:06.269 utilizing GPU cores to distribute out the many atoms 359 00:46:06.269 --> 00:46:09.630 and proteins and calculations. 360 00:46:11.519 --> 00:46:15.960 Does that answer it? Yeah, thank you. Anyone else have a question?
361 00:46:18.539 --> 00:46:26.099 This will also be a topic — if you're able to watch that, um, IonQ talk on Thursday — that's 362 00:46:26.099 --> 00:46:31.019 one of the hopes for quantum computers, that they'll do things like this well. 363 00:46:31.019 --> 00:46:34.110 So, it ties together. 364 00:46:34.110 --> 00:46:40.349 We can get everyone on this, and hopefully get out of the pandemic faster if we find something useful. 365 00:46:40.349 --> 00:46:49.469 Yeah, it's in the research, so I thought it was a relevant topic. Yeah, sure — from time to time, computers are useful. Okay. 366 00:46:49.469 --> 00:46:53.099 So, thanks. Isaac, 367 00:46:53.099 --> 00:46:56.099 what is MPI? — Can I 368 00:46:56.099 --> 00:47:01.469 come back? Sorry, my computer crashed when I was supposed to present. May I present the pthreads? 369 00:47:02.760 --> 00:47:06.869 The — I didn't, I didn't understand you. 370 00:47:06.869 --> 00:47:19.619 I didn't get a chance to present the pthreads — remember, I was — oh, sorry, I didn't realize this is Joseph. Yeah, my PC decided it was a great time to give me the blue screen of death. 371 00:47:19.619 --> 00:47:28.889 Oh, that's been known to happen to me during classes, actually. Sure, go — you want to go now? Thanks. Sure. Thank you. Sorry, I didn't mean for that to happen. 372 00:47:28.889 --> 00:47:38.909 All right, so I'm just going to talk about pthreads really quick. So a thread — 373 00:47:38.909 --> 00:47:45.510 basically, think of it as an item that allows a program to run multiple tasks at once; that's the easiest way to think of it. 374 00:47:45.510 --> 00:47:50.849 So, in C, it's included using an external library from the linker — 375 00:47:50.849 --> 00:47:55.349 I'm trying to remember what the exact link flag is; I believe it's -lpthread. 376 00:47:55.349 --> 00:48:01.440 So, you'll share variables, which means, let's say you have an array: 377 00:48:02.639 --> 00:48:07.230 if you have an array and you modify it in one part of the program, 378 00:48:07.230 --> 00:48:11.099 in one thread, then that will be modified for all the threads, 379 00:48:11.099 --> 00:48:17.909 meaning it's important to only have one thread access it at a time, otherwise we're going to run into other problems. 380 00:48:17.909 --> 00:48:21.329 So, to create a thread in C, it's going to be pthread_create, 381 00:48:21.329 --> 00:48:27.659 and you're going to wait on it with pthread_join; that just means wait for the pthread to finish its execution. 382 00:48:29.639 --> 00:48:34.829 So, why would we use these? Well, if you want to share memory, it can be nice. 383 00:48:34.829 --> 00:48:38.190 There are times where you want to do that, even though it can lead to 384 00:48:38.190 --> 00:48:43.769 a few bugs if you're not careful, because it's going to be lighter weight than duplicating all the memory. 385 00:48:43.769 --> 00:48:48.480 And it allows parallel computation, meaning you can make a large task small. 386 00:48:48.480 --> 00:48:52.230 So, if you add a million-by-million matrix — 387 00:48:52.230 --> 00:48:56.070 if you add two matrices that are a million by a million using one thread — 388 00:48:56.070 --> 00:49:02.730 that might not be the best example, but if you add two large matrices, it's going to take a long time. 389 00:49:02.730 --> 00:49:09.300 So, let's say you chopped it into ten 1000-by-1000 matrices, then pasted them back together: 390 00:49:09.300 --> 00:49:12.929 that would be quicker, if you can compute all the sums that way and then stick them back together.
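The speaker is describing C pthreads (pthread_create / pthread_join); as a rough Python analog of the chunking idea — splitting one big addition across threads and joining them — a minimal sketch might look like this (the array sizes and the chunk count are made up, and note that CPython's GIL limits true CPU parallelism here; the structure is what mirrors the pthreads pattern):

    import threading

    N = 1_000_000
    a = [1] * N
    b = [2] * N
    out = [0] * N              # shared result: each thread writes only its own slice

    def add_chunk(lo, hi):
        # each thread adds its assigned slice of the two input arrays
        for i in range(lo, hi):
            out[i] = a[i] + b[i]

    chunk = N // 4
    threads = [threading.Thread(target=add_chunk, args=(k * chunk, (k + 1) * chunk))
               for k in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()               # wait for every chunk before using the combined result

    print(out[:3], out[-3:])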
391 00:49:12.929 --> 00:49:19.289 So what you would do in that scenario is, you would say, let's take 392 00:49:19.289 --> 00:49:27.539 the first 1000 items, then the next 1000 items, if you have 1000 per thread — or in this case you could do 393 00:49:27.539 --> 00:49:30.900 the first 100,000 items, the second 100,000 items, 394 00:49:30.900 --> 00:49:34.050 anyway, you get the point — and then add those. 395 00:49:34.050 --> 00:49:38.550 That'll be quicker than just waiting on one thread to compute everything 396 00:49:38.550 --> 00:49:48.780 and stick it back together. So, it's lighter weight than processes — at least in C, which is where most of my parallel experience is, so that's why I'm harping on it. 397 00:49:48.780 --> 00:49:54.179 It's lighter weight, and the fact that it's lighter weight means 398 00:49:54.179 --> 00:49:57.989 it takes fewer resources. So if you use a process, 399 00:49:57.989 --> 00:50:06.869 you will end up duplicating all of the variables — and there are ways to share memory using memory keys, but it's not something you necessarily want to deal with. 400 00:50:06.869 --> 00:50:12.329 If you use a thread, the memory will be shared, which means 401 00:50:12.329 --> 00:50:15.599 it will use less of your computer's resources 402 00:50:15.599 --> 00:50:19.559 and it will overall be a little kinder on the hardware 403 00:50:19.559 --> 00:50:27.239 than a process. For a frame of reference, processes use fork, and threads just use pthread_create. 404 00:50:27.239 --> 00:50:31.079 So, there are a couple of problems with threads. 405 00:50:31.079 --> 00:50:41.039 One of the biggest is deadlock. So, one of the ways this is commonly illustrated is the dining philosophers problem. So if you have a bunch of philosophers 406 00:50:41.039 --> 00:50:47.159 sitting around a table, you will have — 407 00:50:47.159 --> 00:50:51.269 if everybody grabs the fork to their right — you tell them all to grab the fork to their right and eat — 408 00:50:51.269 --> 00:50:56.880 well, now everyone's fighting over a couple of forks, and the table's not necessarily set 409 00:50:56.880 --> 00:51:04.260 right. Everyone has now grabbed the fork to their right; if there's a fork missing, now there's one person who doesn't know what to do, 410 00:51:04.260 --> 00:51:12.599 and he's just going to keep trying to reach for it, and that process — or sorry, that thread — would be considered deadlocked, meaning it won't progress. 411 00:51:13.889 --> 00:51:17.519 So there are a couple of ways to fix that. One would be 412 00:51:17.519 --> 00:51:21.869 what in C is called a mutex, which just means, 413 00:51:21.869 --> 00:51:25.409 okay, only one thread can access this part of the code at a time. 414 00:51:25.409 --> 00:51:32.699 So, if you were to put that part of the code in a mutex, that would stop it from executing 415 00:51:32.699 --> 00:51:39.750 if more than one thread tries to touch it. So, race conditions — this is a very common and very annoying bug. 416 00:51:41.760 --> 00:51:44.820 So, a race condition: 417 00:51:44.820 --> 00:51:50.280 with threads, you can't guarantee which one finishes first. 418 00:51:50.280 --> 00:51:55.829 You can just guarantee that they will finish, if, you know, there is no deadlock, 419 00:51:56.940 --> 00:52:00.059 but you can't guarantee which finishes first. 420 00:52:00.059 --> 00:52:08.369 So, let's say you have a math problem with steps and you try to parallelize it, and step 2 — 421 00:52:08.369 --> 00:52:17.309 it depends on step 1, but step 2 finishes
first. You now have a race condition where you're banking on step 1 finishing before step 2. 422 00:52:17.309 --> 00:52:20.340 In that case, you would be better off just 423 00:52:20.340 --> 00:52:23.730 doing step 1 and then going to step 2. 424 00:52:23.730 --> 00:52:28.349 If you have a race condition in that regard, that becomes a problem. 425 00:52:30.389 --> 00:52:35.039 It's a very tricky bug to find, and it's unfortunately very common in software. 426 00:52:37.019 --> 00:52:46.590 So, where you generally want to use threads is, one, when it's a large task that you want to make small, and when it's a repetitive task. So 427 00:52:46.590 --> 00:52:50.489 adding matrices — large matrices — is a very good 428 00:52:50.489 --> 00:52:54.960 use of it; large matrix operations are a very good use, 429 00:52:54.960 --> 00:53:01.170 as well as other things that you'll see in programming, 430 00:53:01.170 --> 00:53:08.820 for example, to deal with rendering different parts of the screen, if it's a large resolution, all at once. 431 00:53:08.820 --> 00:53:13.619 There are other applications that I'm not really going to go into, but threads are 432 00:53:13.619 --> 00:53:16.739 part of the basis of parallel computing. 433 00:53:16.739 --> 00:53:20.250 I have a couple of places I got the sources from, 434 00:53:20.250 --> 00:53:24.719 if anyone's curious; man pages are very useful. 435 00:53:24.719 --> 00:53:29.489 This is the one for pthread_create — 436 00:53:31.260 --> 00:53:36.389 this can be found from your command line. It just tells you about what the return value is. 437 00:53:37.500 --> 00:53:40.800 Any questions? That's really all I have. 438 00:53:43.710 --> 00:53:47.039 Um, 439 00:53:47.039 --> 00:53:56.070 how hard are they to use in practice? Do you have any idea? If you actually have to use them, do they work nicely, or are they a real pain? 440 00:53:57.025 --> 00:54:09.775 It depends: are you trying to write it from scratch? Because that tends to be, in my opinion, a little easier. If you have a clunky code base, it can be very painful, especially if it's prone to — what's 441 00:54:11.340 --> 00:54:14.789 the word I'm looking for — race conditions. 442 00:54:14.789 --> 00:54:21.210 It can be a bit of a pain, but generally worth it for increased performance. 443 00:54:21.210 --> 00:54:31.260 Okay, thank you. Anyone else like to chime in? By the way, in addition to questions, you're all welcome to offer constructive opinions. 444 00:54:31.260 --> 00:54:36.360 If you've used this, you know, share your experience with us, and so on. 445 00:54:36.360 --> 00:54:40.409 Okay, Isaac: 446 00:54:40.409 --> 00:54:47.639 what is MPI? All right, so I'm afraid I don't have slides, but, um, 447 00:54:47.639 --> 00:54:58.769 I'm going to be covering MPI, which stands for Message Passing Interface. So first, it's a de facto standard, rather than an official one created by an organization like the IEEE 448 00:54:58.769 --> 00:55:03.059 or any other official body. Um, that said, it's 449 00:55:03.059 --> 00:55:09.750 still made by a large group of researchers and industry experts, and it's very widely used and supported. 450 00:55:09.750 --> 00:55:16.320 So, another thing to note is that, um, MPI isn't a library in and of itself, but a specification, 451 00:55:16.320 --> 00:55:30.150 or a set of standards and methods. So there are a lot of different implementations of these specs out there, which are both, uh, open and closed source.
So on the closed-source side of things, we have vendor implementations, 452 00:55:30.150 --> 00:55:34.349 where the, uh, the companies provide support to users if there are issues. 453 00:55:34.349 --> 00:55:39.000 And there's also, um, Open MPI and MPICH, which provide support, um, 454 00:55:39.000 --> 00:55:42.599 much in the same way that other large open-source projects do. 455 00:55:42.599 --> 00:55:49.050 So, as for what MPI actually does: it basically lets you pass data between different processes. 456 00:55:49.050 --> 00:55:54.780 It's designed so you can call an API to do this regardless of whether these processes are on the same 457 00:55:54.780 --> 00:56:00.960 physical multi-core processor, different nodes in a cluster, or even different servers entirely. 458 00:56:00.960 --> 00:56:06.659 So no matter the hardware, the user-level interface, when you're coding with it, should remain the same. 459 00:56:07.710 --> 00:56:18.539 So, in terms of its technical functionality, MPI, um, provides a lot of specifications for different types of communication, so I'll cover two of the more fundamental ones, 460 00:56:18.539 --> 00:56:22.289 but there are a few more in addition to these. So, 461 00:56:22.289 --> 00:56:31.050 first, you can send and receive data in a point-to-point fashion. So, for example, you can explicitly send an array from process 1 to process 100 462 00:56:31.050 --> 00:56:45.300 and receive a different one in return. You can also do, um, collective or global communication, where you send a piece of data to every single process, or you can collect and consolidate data from every process into a single one. 463 00:56:45.300 --> 00:56:52.619 So, a good example of this would be, um, finding the maximum value in a huge data set of integers. So, 464 00:56:52.619 --> 00:56:56.280 as you can imagine, it's much easier to do this with a global operation, 465 00:56:56.280 --> 00:57:01.559 where you can reduce each process's maximum to a global one, than it would be to kind of 466 00:57:01.559 --> 00:57:06.869 work this out with, like, a number of point-to-point communications. 467 00:57:06.869 --> 00:57:11.280 So — since in some problems or programs 468 00:57:11.280 --> 00:57:21.510 it's much more intuitive or efficient to use different types of communication at different points, especially when you have specific knowledge of how your processes are going to interact. 469 00:57:21.510 --> 00:57:31.710 So, um, I think Dan discussed this a little, but I'll go into a little more detail about, um, how communication comes into play for computational fluid dynamics codes. 470 00:57:31.710 --> 00:57:46.110 So, for CFD, you're always going to divide the physical domain that you're interested in into something that's called a mesh. So you're cutting up a chunk of space you're interested in into lots of tetrahedra or hexahedra, 471 00:57:46.110 --> 00:57:53.820 or, for 2D, it's triangles or squares, and it can get into the billions for really big problems. 472 00:57:53.820 --> 00:58:04.530 So, when you're parallelizing the physics solver, this mesh that you have is then split into chunks, or partitions, so each process can handle solving for the physics in that area of the domain. 473 00:58:04.530 --> 00:58:13.170 This is like, um, if you're interested in flow through a pipe, this would be cutting the pipe into hundreds of sections and solving for the physics in each section.
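A rough sketch of the two kinds of communication described above, written with mpi4py (a Python binding for MPI; the speaker mentions C, C++, and Fortran bindings). The array contents and sizes are made up, and this would be launched with something like "mpiexec -n 4 python example.py":

    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    # point-to-point: rank 0 explicitly sends a small list to rank 1
    if size > 1:
        if rank == 0:
            comm.send([1, 2, 3], dest=1, tag=0)
        elif rank == 1:
            data = comm.recv(source=0, tag=0)
            print("rank 1 received", data)

    # collective: each rank finds its local maximum, then a reduce gives the global max
    local = np.random.randint(0, 1_000_000, size=100_000)
    local_max = int(local.max())
    global_max = comm.reduce(local_max, op=MPI.MAX, root=0)
    if rank == 0:
        print("global maximum:", global_max)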
474 00:58:13.170 --> 00:58:20.429 So, from a physical standpoint, each part of the mesh only really needs to communicate with the partitions that are directly adjacent to it, 475 00:58:20.429 --> 00:58:24.570 and you can reflect this in the code, 476 00:58:24.570 --> 00:58:35.579 in the way each partition communicates with the others. But in reality these relationships can get more complex than this; still, the fundamental idea of making communication more efficient for your problem, 477 00:58:35.579 --> 00:58:39.150 um, for a specific problem in your code, is, uh, 478 00:58:39.150 --> 00:58:43.019 very important. So, 479 00:58:43.019 --> 00:58:52.619 MPI provides specifications for both types of communication that I talked about here, but there are a few more kind of arcane or complex methods. 480 00:58:52.619 --> 00:58:58.559 And again, this is all handled behind the scenes, so the user doesn't really have to care about the underlying network. 481 00:58:58.559 --> 00:59:02.070 So, if you're running a job that uses, like, 4 processes, 482 00:59:02.070 --> 00:59:11.190 it doesn't matter whether it's 4 processes on a single machine or 4 processes on 4 different servers; your source code shouldn't have to change. 483 00:59:11.190 --> 00:59:14.789 So, the level of abstraction should still maintain 484 00:59:14.789 --> 00:59:19.619 really good performance in terms of latency and bandwidth. So, um, 485 00:59:19.619 --> 00:59:25.380 to wrap things up: you use MPI when your program needs your processes to run in concert, 486 00:59:25.380 --> 00:59:28.949 which is true for most parallel computing applications. 487 00:59:28.949 --> 00:59:36.150 So these are, like, divide-and-conquer algorithms, or just really big problems that require lots of memory or processing power. 488 00:59:36.150 --> 00:59:39.269 So, the advantage of using MPI is that you can 489 00:59:39.269 --> 00:59:43.230 handle your communication in a really clean way. You don't have to 490 00:59:43.230 --> 00:59:47.219 care about communication errors, since that's all handled for you. 491 00:59:47.219 --> 00:59:51.510 You can avoid memory-to-memory copying that you might have with other systems. 492 00:59:51.510 --> 00:59:55.710 And, um, because the specification is standardized, 493 00:59:55.710 --> 00:59:59.760 vendors and hardware manufacturers can optimize for it. 494 00:59:59.760 --> 01:00:05.039 So, as far as I know there are, um, bindings for C, C++, and Fortran, 495 01:00:05.039 --> 01:00:09.900 and it's been used in applications with millions of processes with really good scaling. 496 01:00:09.900 --> 01:00:17.219 So, are there any questions? 497 01:00:20.760 --> 01:00:25.079 So, at this low level, I guess you define — 498 01:00:26.849 --> 01:00:39.960 Sorry, could you repeat that? I can't quite hear you. — Oh, my mistake, I forgot to unmute my mic properly. Um, so at this level you specify — you lay out the topology, and you specify, like, 499 01:00:39.960 --> 01:00:47.699 what process communicates with what, and so on. I guess there are higher-level tools, I would expect, that would build on top of this, that would 500 01:00:47.699 --> 01:00:51.929 design the topology for you and do the decomposition. 501 01:00:51.929 --> 01:00:56.130 Um, well, from what I understand, 502 01:00:56.130 --> 01:00:59.550 it's designed to be kind of like 503 01:00:59.550 --> 01:01:02.789 middleware, so —
504 01:01:02.789 --> 01:01:09.239 so, while it is handling the communication efficiently, the kind of topology of the communication 505 01:01:09.239 --> 01:01:14.429 can be really program-specific. Um, I imagine there are tools out there 506 01:01:14.429 --> 01:01:17.610 for kind of standardized problems, but 507 01:01:17.610 --> 01:01:21.840 I don't think MPI handles the kind of 508 01:01:21.840 --> 01:01:26.250 topology or hierarchy of communication 509 01:01:26.250 --> 01:01:38.820 in a very explicit way. — So it's at the middle level, then. Okay, thanks. Anyone else? Okay, thanks. Let's see — Blaine, why would we use Matlab for parallel processing? 510 01:01:43.860 --> 01:01:47.130 Give me a second, trying to set up my screen share right. 511 01:01:47.130 --> 01:01:54.809 Silence. 512 01:01:59.130 --> 01:02:10.409 Okay. Uh, does everyone see this fine? Yes. Yes. Cool. So, uh, parallel computing in Matlab: 513 01:02:11.545 --> 01:02:23.844 Matlab has a dedicated toolbox for parallel computing, and it can work with the different resources that it has, whether it be just multiple cores, uh, GPUs, and entire clusters if necessary. 514 01:02:24.684 --> 01:02:28.434 Also, many of the other toolboxes that Matlab has have some, um, 515 01:02:28.739 --> 01:02:36.150 parallel computing aspects built into them outside of the Parallel Computing Toolbox, um, where you can just automatically have, like, 516 01:02:36.150 --> 01:02:39.719 specific high-resource functions 517 01:02:39.719 --> 01:02:43.679 offloaded onto other hardware. So, 518 01:02:43.679 --> 01:02:44.844 from a practical standpoint, 519 01:02:44.844 --> 01:02:45.534 some of the more, 520 01:02:45.565 --> 01:02:45.954 um, 521 01:02:45.985 --> 01:02:49.974 easy statements to use for parallel programming, or parallel, 522 01:02:50.304 --> 01:02:50.514 uh, 523 01:02:50.545 --> 01:02:55.074 resources, in Matlab are, first of all, parallel for loops, or parfor. So this — 524 01:02:55.074 --> 01:03:09.715 just, sort of similar to the pragma statements we had looked at in the previous class, this lets you take a for loop with independent iterations and run them off on different workers or nodes that you have connected, 525 01:03:10.074 --> 01:03:11.244 or just different processes 526 01:03:11.244 --> 01:03:12.625 if you're just running on a single device. 527 01:03:13.465 --> 01:03:26.905 There's also a background function evaluation where, if you have a function in Matlab that you want to have evaluated off on a separate thread, you can have that spun off to do its thing, 528 01:03:26.905 --> 01:03:28.465 and you can also set it up to be 529 01:03:28.769 --> 01:03:31.644 done with different workers entirely. Uh, 530 01:03:31.675 --> 01:03:32.094 also, 531 01:03:32.094 --> 01:03:36.684 there are ways to run entire Matlab scripts independently, called from other scripts 532 01:03:36.684 --> 01:03:44.244 or the Matlab console, and then you can interact with the jobs that you have spun off by waiting for them to complete or loading the outputs from them, 533 01:03:44.244 --> 01:03:47.215 and then eventually, like, deallocating the resources when they're done. 534 01:03:48.780 --> 01:03:55.409 So, Matlab also has some GPU support: it has a gpuArray object, and most of the traditional, um, 535 01:03:55.409 --> 01:04:02.789 matrix functions that you would expect from Matlab also work on a gpuArray instead,
536 01:04:02.789 --> 01:04:17.039 with a lot of speedups. Matlab has functions for controlling the GPU resources, like figuring out which individual device you're using, gathering the results back to local memory instead of GPU memory, and then just managing the overall usage. 537 01:04:17.039 --> 01:04:23.550 So, behind the scenes — this is from, uh, some of the documentation — 538 01:04:23.550 --> 01:04:31.409 you can use multiple of these at the same time. So if you had a Matlab client, you can use batch to run another script, and then even inside of that script you can 539 01:04:31.409 --> 01:04:35.130 dispatch with a parallel for loop to other Matlab, um, 540 01:04:35.130 --> 01:04:38.519 processes, to further spread out the load. 541 01:04:38.519 --> 01:04:50.190 And just a nice graphical example of a couple of these lines being used: an example parfor going 1 through 200, getting the max from a random 542 01:04:50.190 --> 01:04:58.889 vector, and also another example of using a gpuArray, where you can use it just like a normal array in any other case. 543 01:04:58.889 --> 01:05:09.300 And most of the functions have support to run their specific operations on the GPU, although some of them do, like, pull it back to local memory to do specific things that don't have that support. 544 01:05:09.300 --> 01:05:17.755 So some of the functions that Matlab has built-in GPU support for, and will just work, are the FFT, fitting linear regression models, convolution, 545 01:05:17.784 --> 01:05:20.605 eigenvectors, sorting, matrix power, matrix 546 01:05:20.635 --> 01:05:21.355 multiplication, 547 01:05:21.355 --> 01:05:21.775 matrix 548 01:05:21.775 --> 01:05:22.315 division, 549 01:05:24.264 --> 01:05:26.094 cross products, and a lot of the other, 550 01:05:26.094 --> 01:05:26.364 like, 551 01:05:26.364 --> 01:05:28.824 really resource-intensive toolboxes tend to have 552 01:05:28.855 --> 01:05:29.275 um, 553 01:05:29.695 --> 01:05:30.235 GPU support as well, 554 01:05:30.235 --> 01:05:32.065 or at least have the option for you to set that up, 555 01:05:32.425 --> 01:05:32.635 like, 556 01:05:32.635 --> 01:05:35.664 the machine learning toolbox, image processing toolbox, deep learning toolbox, 557 01:05:35.664 --> 01:05:35.875 signal 558 01:05:35.875 --> 01:05:38.215 processing toolbox, and optimization toolbox. 559 01:05:39.630 --> 01:05:49.650 So, additionally, Matlab can run on clusters with what's referred to recently as Parallel Server. Uh, basically, you can set up a pool in this cluster and 560 01:05:49.650 --> 01:05:54.059 distribute out to all the different, um, nodes. More specifically, 561 01:05:54.059 --> 01:06:02.340 here they have a nice little graph that was in the documentation of them running a matrix multiplication with different numbers of workers, 562 01:06:03.780 --> 01:06:09.954 just to show the value, I suppose. There are also a couple of other, uh, more interesting kinds of parallelism: 563 01:06:09.954 --> 01:06:19.914 like, you can integrate other parallel, uh, code pieces, like CUDA libraries, or compile specific Matlab code to CUDA to run on embedded hardware. 564 01:06:20.219 --> 01:06:24.144 Matlab has some basic functions for MPI support; 565 01:06:24.233 --> 01:06:24.655 uh, 566 01:06:24.925 --> 01:06:35.875 Matlab's simulation software, Simulink, can also run on parallel clusters, and you can process really large data sets that would be outside of memory with what are referred to as —
567 01:06:36.150 --> 01:06:43.289 Effectively, you have a window that is automatically managed by Matlab for pulling in the new data. 568 01:06:43.289 --> 01:06:46.440 That you need to have in local memory to process. 569 01:06:46.440 --> 01:06:51.690 So, here are just some references that I got this from, and, any questions? 570 01:06:56.730 --> 01:07:09.690 Yeah, thank you. Oh, one comment: I used Matlab for some parallel stuff a while back and hit a license limitation. I couldn't run on more than. 571 01:07:09.690 --> 01:07:13.289 I don't know, so many Intel cores at once. 572 01:07:13.289 --> 01:07:23.940 Which was annoying. Yeah. Yeah, it's really easy, sure, since Matlab's done the work for you: a lot of the stuff you want to parallelize, matrix inversion or. 573 01:07:23.940 --> 01:07:29.130 Eigenvalues and so on, and yeah. 574 01:07:29.130 --> 01:07:36.179 It's processed in parallel. Cool. Let's see. 575 01:07:36.179 --> 01:07:41.639 Mark, are you willing to tell us about the top 2 supercomputers? 576 01:07:41.639 --> 01:07:47.610 Yeah, I just need to pull up the presentation. 577 01:07:52.349 --> 01:07:56.250 So, I'll share my screen. 578 01:07:56.250 --> 01:08:00.989 Can you guys hear me okay? Yeah, I can hear you fine. Okay. 579 01:08:00.989 --> 01:08:10.800 Presentation. 580 01:08:10.800 --> 01:08:24.840 Okay, so I looked at the top 2 supercomputers in the world as reported on the Top. 581 01:08:24.840 --> 01:08:28.229 500 list. 582 01:08:28.229 --> 01:08:32.010 The TOP500 organization. 583 01:08:32.010 --> 01:08:38.520 You know, I mean, I know Professor Franklin showed it to us and we've all seen it, but. 584 01:08:38.520 --> 01:08:42.989 So, they gather statistics on high-performance computers and. 585 01:08:42.989 --> 01:08:49.859 You know, they focus on things like the number of cores and memory and. 586 01:08:49.859 --> 01:08:52.859 Speed and all that, but also the location and. 587 01:08:52.859 --> 01:08:57.840 The organization that runs it and, like, what they're doing with it. 588 01:08:59.220 --> 01:09:02.609 And the main objective is. 589 01:09:02.609 --> 01:09:10.590 To provide a ranked list of general-purpose systems that are in common use for high-end applications. 590 01:09:10.590 --> 01:09:13.619 Um, so. 591 01:09:13.619 --> 01:09:18.119 Something about that, or, I mean, I'll get back to that, but. 592 01:09:18.119 --> 01:09:23.189 So the list is updated twice a year, and it's been happening since 1993. 593 01:09:23.189 --> 01:09:30.600 And the main benchmark they use to make the rankings is something called the LINPACK benchmark. 594 01:09:30.600 --> 01:09:34.710 Um, so. 595 01:09:34.710 --> 01:09:40.409 The benchmark is, you know, High Performance LINPACK, or HPL. 596 01:09:40.409 --> 01:09:44.850 That's like a software package. Sorry, my dog's barking. 597 01:09:44.850 --> 01:09:48.779 So, what it basically does is. 598 01:09:48.779 --> 01:09:53.100 There's a dense system of linear equations that they. 599 01:09:53.100 --> 01:09:57.720 Solve, and. 600 01:09:57.720 --> 01:10:06.479 The organizations, you know, install it and run it and report the results, but. 601 01:10:06.479 --> 01:10:10.649 There's plenty of, like, checking and all that to. 602 01:10:10.649 --> 01:10:14.430 Verify the statistics that they are reporting. 603 01:10:14.430 --> 01:10:18.239 So. 604 01:10:18.239 --> 01:10:23.699 Something interesting about it is, the TOP500 list, and, you know.
605 01:10:23.699 --> 01:10:28.649 In general, it allows the user to scale the size of the problem. 606 01:10:28.649 --> 01:10:33.300 And to optimize the software in order to achieve the best performance. 607 01:10:33.300 --> 01:10:39.060 So, I guess that goes back to the ideas of scalability and all that, and. 608 01:10:39.060 --> 01:10:44.340 How, you know, different. 609 01:10:44.340 --> 01:10:47.579 Problem sizes can utilize the machine. 610 01:10:47.579 --> 01:10:55.739 The best way, and all that. So it lets them, you know, choose their best configuration. 611 01:10:55.739 --> 01:11:00.539 And obviously, you know, this is just one test, so it can't. 612 01:11:00.539 --> 01:11:06.329 Reflect the overall performance of the system entirely; like, no one thing can do that. 613 01:11:06.329 --> 01:11:09.390 And so there's another. 614 01:11:09.390 --> 01:11:15.000 Metric, or benchmark, that they also use to complement it, which is the. 615 01:11:15.000 --> 01:11:19.260 High Performance Conjugate Gradients, or HPCG. 616 01:11:19.260 --> 01:11:24.180 And that's basically testing more about, like, the. 617 01:11:24.180 --> 01:11:31.079 Data access and, like, the interconnect, the computer's communication. 618 01:11:31.079 --> 01:11:35.460 So, just so. 619 01:11:35.460 --> 01:11:39.569 You know, a refresher on units: we all know bytes and. 620 01:11:39.569 --> 01:11:44.010 Flops, your floating point operations per second. 621 01:11:45.149 --> 01:11:52.260 So, you know, just a refresher on the numbers, like, a petaflop, for example. 622 01:11:52.260 --> 01:11:57.239 What is that, is that a thousand. 623 01:11:58.739 --> 01:12:06.420 Oh, it's a thousand teraflops. Okay, I need to remind myself too. Okay, so anyway. 624 01:12:06.420 --> 01:12:10.109 The fastest computer in the world. 625 01:12:10.109 --> 01:12:16.949 It's called the supercomputer Fugaku. It's in Japan. 626 01:12:16.949 --> 01:12:21.899 This table, I just got the data from the TOP500. 627 01:12:21.899 --> 01:12:27.239 List website, so as you can see, there's. 628 01:12:27.239 --> 01:12:34.590 Over 7 and a half million cores, and there's over 5,000 terabytes of memory. 629 01:12:34.590 --> 01:12:38.250 The processor they use is the A64. 630 01:12:38.250 --> 01:12:42.869 FX, which is an Arm-based architecture. 631 01:12:42.869 --> 01:12:47.579 And so the. 632 01:12:47.579 --> 01:12:53.640 There's the actual performance that they got. Oh, I guess I didn't really. 633 01:12:53.640 --> 01:12:58.680 So, Rmax and Rpeak, I'll show that here. 634 01:12:58.680 --> 01:13:02.010 There's the actual performance that they achieved. 635 01:13:02.010 --> 01:13:06.630 Which was, you know, almost half a million teraflops. 636 01:13:06.630 --> 01:13:11.100 Per second. I mean, yeah, I guess that's, like. 637 01:13:11.100 --> 01:13:15.960 The statistic that matters, or, you know, the big statistic. 638 01:13:15.960 --> 01:13:23.729 And theoretically, I guess from calculations, they figured out. 639 01:13:23.729 --> 01:13:28.739 You know, the theoretical peak performance they could achieve, and. 640 01:13:28.739 --> 01:13:34.619 This Nmax just shows the problem size that they used. 641 01:13:34.619 --> 01:13:40.979 To get Rmax, and so now that varies. 642 01:13:40.979 --> 01:13:45.000 And they use Red Hat as their operating system. 643 01:13:47.909 --> 01:13:55.470 So, I just pulled this quote from the TOP500 list site; I'll just read it.
644 01:13:55.470 --> 01:13:58.890 "Fugaku remains at the top spot, growing its Arm. 645 01:13:58.890 --> 01:14:01.920 A64FX capacity. 646 01:14:01.920 --> 01:14:05.880 From 7,299,072 cores. 647 01:14:05.880 --> 01:14:09.270 To 7.6 million cores. 648 01:14:09.270 --> 01:14:12.390 The additional hardware enables its new world record. 649 01:14:12.390 --> 01:14:16.079 442 petaflop. 650 01:14:16.079 --> 01:14:22.949 Result on HPL; this puts it 3 times ahead of the number 2 system on the list. 651 01:14:22.949 --> 01:14:26.130 Fugaku was constructed by. 652 01:14:26.130 --> 01:14:31.920 Fujitsu and is installed at the RIKEN Center for Computational Science. 653 01:14:31.920 --> 01:14:36.449 In Kobe, Japan." So, yeah, Fujitsu is the. 654 01:14:36.449 --> 01:14:39.989 Manufacturer, or not the manufacturer, really, but the. 655 01:14:39.989 --> 01:14:43.739 Company that made the. 656 01:14:43.739 --> 01:14:48.180 Computer, or the processor, I should say. 657 01:14:48.180 --> 01:14:52.590 They made it in collaboration with Arm, I guess. 658 01:14:52.590 --> 01:15:02.850 And so the project was initiated by Japan's Ministry of Education, Culture, Sports, Science and Technology. 659 01:15:02.850 --> 01:15:08.279 Sorry about the dog. It just started in 2014. 660 01:15:08.279 --> 01:15:14.010 So, computer number 2, it's called Summit. 661 01:15:14.010 --> 01:15:18.090 And the manufacturer is IBM. 662 01:15:18.090 --> 01:15:22.470 So it's not confusing. 663 01:15:22.470 --> 01:15:29.430 These are the stats, and then I just put the number 1, Fugaku, next to it just for a comparison, but. 664 01:15:34.020 --> 01:15:40.800 So. 665 01:15:40.800 --> 01:15:48.569 You can see, there's a lot, you know, the cores, 2 and a half million. 666 01:15:48.569 --> 01:15:53.399 Almost 3,000 terabytes of memory. 667 01:15:53.399 --> 01:15:57.779 They use an IBM POWER9 processor. 668 01:15:57.779 --> 01:16:03.659 And the Rmax. 669 01:16:03.659 --> 01:16:08.729 In teraflops per second is. 670 01:16:08.729 --> 01:16:11.760 You know, a little less than half of the. 671 01:16:11.760 --> 01:16:16.949 Number 1 supercomputer. So I guess that's the speed. 672 01:16:16.949 --> 01:16:20.909 And they also use Red Hat as their operating system. 673 01:16:20.909 --> 01:16:24.390 So. 674 01:16:25.590 --> 01:16:28.859 This is the same little summary. 675 01:16:28.859 --> 01:16:35.100 From the website; I'll just read it. "Summit, an IBM-built system at the Oak Ridge National Laboratory. 676 01:16:35.100 --> 01:16:39.210 In Tennessee, remains the fastest system in the US, with a performance. 677 01:16:39.210 --> 01:16:42.810 Of 148.8 petaflops. 678 01:16:42.810 --> 01:16:50.729 Summit has 4,356 nodes, each one housing two 22-core. 679 01:16:50.729 --> 01:16:54.899 POWER9 CPUs and six NVIDIA Tesla V100 GPUs." 680 01:16:54.899 --> 01:17:00.449 So, Summit was initiated in 2018, you know, after they had. 681 01:17:00.449 --> 01:17:03.869 Other supercomputers as well, but. 682 01:17:03.869 --> 01:17:07.829 I guess Summit was the newest one. 683 01:17:07.829 --> 01:17:11.939 And it's housed by the. 684 01:17:11.939 --> 01:17:16.050 Oak Ridge Leadership Computing Facility. 685 01:17:16.050 --> 01:17:20.819 Which was established at the Oak Ridge National Laboratory in Tennessee. 686 01:17:20.819 --> 01:17:25.289 And that's all I have. I mean, I had some. 687 01:17:25.289 --> 01:17:29.340 More information on, like, the processors that are in the. 688 01:17:29.340 --> 01:17:34.170 Nodes, but I didn't go into that in the presentation.
689 01:17:34.170 --> 01:17:37.979 Um, and because I also. 690 01:17:37.979 --> 01:17:42.600 Didn't really understand a lot of the differences in the specs and stuff. 691 01:17:42.600 --> 01:17:47.130 I didn't get deep into that, but anyway, that's all I have. 692 01:17:51.300 --> 01:17:54.899 Are there any questions? Yeah, so, thank you. 693 01:17:54.899 --> 01:18:01.170 So, it looks like for the IBM Summit, most of the work is being done by the. 694 01:18:02.369 --> 01:18:05.850 25,000-some, you know, NVIDIA GPUs. 695 01:18:07.140 --> 01:18:15.119 Yeah. So, okay, that's a very different, I guess, architecture. 696 01:18:15.119 --> 01:18:18.180 Or scheme between the top 2. 697 01:18:18.180 --> 01:18:22.079 So there's competition here. 698 01:18:22.079 --> 01:18:25.199 We'll see how it goes. 699 01:18:26.220 --> 01:18:32.069 And again, these are the fastest publicly known machines; you know, we could assume there might be others. 700 01:18:32.069 --> 01:18:36.479 All right, anyone else have. 701 01:18:36.479 --> 01:18:43.229 Questions? Yeah, but the POWER9, that's a very nice CPU. So. 702 01:18:43.229 --> 01:18:50.880 It has one thing that my machine does not have, which is that the six NVIDIA GPUs. 703 01:18:50.880 --> 01:18:55.260 That are plugged into each POWER9. 704 01:18:55.260 --> 01:18:58.739 Can be connected to each other by a very fast bus. 705 01:18:58.739 --> 01:19:05.789 So they can be very closely integrated together, with a high-speed bus between them. 706 01:19:05.789 --> 01:19:09.239 Yeah, I guess that goes back to that. 707 01:19:09.239 --> 01:19:13.800 HPCG benchmark, or whatever it's called; I think that's kind of what that looks at. 708 01:19:15.659 --> 01:19:19.920 Yeah, because for a lot of this, it's the data movement. 709 01:19:19.920 --> 01:19:29.399 The time for that is greater than the CPU time for many of the problems. So that's why they've put a lot of work into high-speed buses. 710 01:19:29.399 --> 01:19:41.670 Cool, thank you. Well, since we're talking about high-speed buses: one of the new NVIDIA gimmicks is that they'll do a lossless compression of the data if they're sending data over a bus. 711 01:19:41.670 --> 01:19:56.515 They do a lossless compression and decompression, and their assumption is that they have cycles to spare, and the time it takes to compress the data before putting it on the bus is paid off because there's less data on the bus. 712 01:19:56.850 --> 01:20:07.649 Crazy ideas. Okay, thank you. We're running close to the end of the official class time and people don't like to run late, so we'll do, we've actually got 2. 713 01:20:07.649 --> 01:20:13.590 More talks; we'll do them Thursday after the IonQ talk. 714 01:20:13.590 --> 01:20:20.640 I'll stay around for a few minutes if anyone has any questions or would like to talk about anything. 715 01:20:21.625 --> 01:20:33.145 Other than that, those of you that are able to, register for the IonQ thing; it'll be 11:30 AM on Thursday, and after that we can talk about it or have the last 2 talks for a few minutes. 716 01:20:33.145 --> 01:20:37.015 And again, for the IonQ thing, you have to register at least a day in advance. 717 01:20:38.069 --> 01:20:43.289 So, other than that. 718 01:20:43.289 --> 01:20:52.020 Have a good week, enjoy the blizzard. I'm looking out my window, I'm just 10 miles from RPI at the moment, and it started to snow during the class. 719 01:20:52.020 --> 01:20:56.939 Lightly, so far, anyway.
720 01:20:56.939 --> 01:21:01.560 So, my dog was barking at the, we have someone plowing our driveway, and. 721 01:21:01.560 --> 01:21:05.340 Oh, okay. 722 01:21:05.340 --> 01:21:08.819 Snow in D.C.? Oh, yeah. 723 01:21:09.534 --> 01:21:17.965 I was down at NSF around 2000 to 2002. I arrived in January 2000; I was staying at a cheap motel, and the next morning I could hardly open. 724 01:21:17.965 --> 01:21:26.755 I could hardly push the screen door open, and the motel door opened outward, because there was 10 inches of snow or something. That was a record closure back then. 725 01:21:26.755 --> 01:21:41.274 And DC, well, I was in Arlington, but DC was using solar-powered snow removal for the small back alleyways and so on, at least through that decade: the good Lord provided the snow, the good Lord will remove it in his own time. 726 01:21:43.529 --> 01:21:48.029 Oh, okay. Any relevant questions? 727 01:21:50.550 --> 01:21:57.210 No? Okay, then goodbye. Okay.