
[GigaCourse.Com] Udemy - A deep understanding of deep learning (with Python intro)

Torrent overview

Torrent name: [GigaCourse.Com] Udemy - A deep understanding of deep learning (with Python intro)
File type: Video
Number of files: 262
Total size: 16.06 GB
Indexed: 2023-02-08 08:28
Completed downloads: 3
Popularity: 217
Last downloaded: 2024-11-21 06:40

Download the torrent file (.torrent)

Magnet link

magnet:?xt=urn:btih:96d70819d6df8b8ca4d22a7c33e50811ba5da118&dn=[GigaCourse.Com] Udemy - A deep understanding of deep learning (with Python intro)
Copy the link into Xunlei (Thunder) or QQ Xuanfeng to download, or use Baidu Cloud offline download.


Files in this torrent

[GigaCourse.Com] Udemy - A deep understanding of deep learning (with Python intro).torrent
  • 01 - Introduction/001 How to learn from this course.mp4 (54.97 MB)
  • 01 - Introduction/002 Using Udemy like a pro.mp4 (25.66 MB)
  • 02 - Download all course materials/001 Downloading and using the code.mp4 (33.71 MB)
  • 02 - Download all course materials/002 My policy on code-sharing.mp4 (3.88 MB)
  • 03 - Concepts in deep learning/001 What is an artificial neural network.mp4 (29.4 MB)
  • 03 - Concepts in deep learning/002 How models learn.mp4 (35.36 MB)
  • 03 - Concepts in deep learning/003 The role of DL in science and knowledge.mp4 (87.75 MB)
  • 03 - Concepts in deep learning/004 Running experiments to understand DL.mp4 (74.84 MB)
  • 03 - Concepts in deep learning/005 Are artificial neurons like biological neurons.mp4 (56.29 MB)
  • 04 - About the Python tutorial/001 Should you watch the Python tutorial.mp4 (9.38 MB)
  • 05 - Math, numpy, PyTorch/002 Introduction to this section.mp4 (4.45 MB)
  • 05 - Math, numpy, PyTorch/003 Spectral theories in mathematics.mp4 (43.9 MB)
  • 05 - Math, numpy, PyTorch/004 Terms and datatypes in math and computers.mp4 (15.83 MB)
  • 05 - Math, numpy, PyTorch/005 Converting reality to numbers.mp4 (13.44 MB)
  • 05 - Math, numpy, PyTorch/006 Vector and matrix transpose.mp4 (17.83 MB)
  • 05 - Math, numpy, PyTorch/007 OMG it's the dot product!.mp4 (19.84 MB)
  • 05 - Math, numpy, PyTorch/008 Matrix multiplication.mp4 (45.49 MB)
  • 05 - Math, numpy, PyTorch/009 Softmax.mp4 (70.21 MB)
  • 05 - Math, numpy, PyTorch/010 Logarithms.mp4 (20.84 MB)
  • 05 - Math, numpy, PyTorch/011 Entropy and cross-entropy.mp4 (58.76 MB)
  • 05 - Math, numpy, PyTorch/012 Minmax and argminargmax.mp4 (45.66 MB)
  • 05 - Math, numpy, PyTorch/013 Mean and variance.mp4 (32.91 MB)
  • 05 - Math, numpy, PyTorch/014 Random sampling and sampling variability.mp4 (41.27 MB)
  • 05 - Math, numpy, PyTorch/015 Reproducible randomness via seeding.mp4 (49.13 MB)
  • 05 - Math, numpy, PyTorch/016 The t-test.mp4 (59.68 MB)
  • 05 - Math, numpy, PyTorch/017 Derivatives intuition and polynomials.mp4 (32.09 MB)
  • 05 - Math, numpy, PyTorch/018 Derivatives find minima.mp4 (18.65 MB)
  • 05 - Math, numpy, PyTorch/019 Derivatives product and chain rules.mp4 (25.85 MB)
  • 06 - Gradient descent/001 Overview of gradient descent.mp4 (40.06 MB)
  • 06 - Gradient descent/002 What about local minima.mp4 (25.64 MB)
  • 06 - Gradient descent/003 Gradient descent in 1D.mp4 (87.82 MB)
  • 06 - Gradient descent/004 CodeChallenge unfortunate starting value.mp4 (57.01 MB)
  • 06 - Gradient descent/005 Gradient descent in 2D.mp4 (96.38 MB)
  • 06 - Gradient descent/006 CodeChallenge 2D gradient ascent.mp4 (27.84 MB)
  • 06 - Gradient descent/007 Parametric experiments on g.d.mp4 (98.75 MB)
  • 06 - Gradient descent/008 CodeChallenge fixed vs. dynamic learning rate.mp4 (84.02 MB)
  • 06 - Gradient descent/009 Vanishing and exploding gradients.mp4 (22.33 MB)
  • 06 - Gradient descent/010 Tangent Notebook revision history.mp4 (14.79 MB)
  • 07 - ANNs (Artificial Neural Networks)/001 The perceptron and ANN architecture.mp4 (37.14 MB)
  • 07 - ANNs (Artificial Neural Networks)/002 A geometric view of ANNs.mp4 (29.84 MB)
  • 07 - ANNs (Artificial Neural Networks)/003 ANN math part 1 (forward prop).mp4 (32.79 MB)
  • 07 - ANNs (Artificial Neural Networks)/004 ANN math part 2 (errors, loss, cost).mp4 (37.33 MB)
  • 07 - ANNs (Artificial Neural Networks)/005 ANN math part 3 (backprop).mp4 (27.97 MB)
  • 07 - ANNs (Artificial Neural Networks)/006 ANN for regression.mp4 (74.2 MB)
  • 07 - ANNs (Artificial Neural Networks)/007 CodeChallenge manipulate regression slopes.mp4 (101.06 MB)
  • 07 - ANNs (Artificial Neural Networks)/008 ANN for classifying qwerties.mp4 (130.39 MB)
  • 07 - ANNs (Artificial Neural Networks)/009 Learning rates comparison.mp4 (168.64 MB)
  • 07 - ANNs (Artificial Neural Networks)/010 Multilayer ANN.mp4 (105.28 MB)
  • 07 - ANNs (Artificial Neural Networks)/011 Linear solutions to linear problems.mp4 (36.75 MB)
  • 07 - ANNs (Artificial Neural Networks)/012 Why multilayer linear models don't exist.mp4 (19.28 MB)
  • 07 - ANNs (Artificial Neural Networks)/013 Multi-output ANN (iris dataset).mp4 (142.01 MB)
  • 07 - ANNs (Artificial Neural Networks)/014 CodeChallenge more qwerties!.mp4 (81.86 MB)
  • 07 - ANNs (Artificial Neural Networks)/015 Comparing the number of hidden units.mp4 (67.58 MB)
  • 07 - ANNs (Artificial Neural Networks)/016 Depth vs. breadth number of parameters.mp4 (97.7 MB)
  • 07 - ANNs (Artificial Neural Networks)/017 Defining models using sequential vs. class.mp4 (65.76 MB)
  • 07 - ANNs (Artificial Neural Networks)/018 Model depth vs. breadth.mp4 (114.95 MB)
  • 07 - ANNs (Artificial Neural Networks)/019 CodeChallenge convert sequential to class.mp4 (36.5 MB)
  • 07 - ANNs (Artificial Neural Networks)/021 Reflection Are DL models understandable yet.mp4 (51.72 MB)
  • 08 - Overfitting and cross-validation/001 What is overfitting and is it as bad as they say.mp4 (54.3 MB)
  • 08 - Overfitting and cross-validation/002 Cross-validation.mp4 (49.06 MB)
  • 08 - Overfitting and cross-validation/003 Generalization.mp4 (13.26 MB)
  • 08 - Overfitting and cross-validation/004 Cross-validation -- manual separation.mp4 (70.36 MB)
  • 08 - Overfitting and cross-validation/005 Cross-validation -- scikitlearn.mp4 (105.84 MB)
  • 08 - Overfitting and cross-validation/006 Cross-validation -- DataLoader.mp4 (121.26 MB)
  • 08 - Overfitting and cross-validation/007 Splitting data into train, devset, test.mp4 (56.26 MB)
  • 08 - Overfitting and cross-validation/008 Cross-validation on regression.mp4 (26.33 MB)
  • 09 - Regularization/001 Regularization Concept and methods.mp4 (61.53 MB)
  • 09 - Regularization/002 train() and eval() modes.mp4 (15.67 MB)
  • 09 - Regularization/003 Dropout regularization.mp4 (103.65 MB)
  • 09 - Regularization/004 Dropout regularization in practice.mp4 (130.74 MB)
  • 09 - Regularization/005 Dropout example 2.mp4 (38.12 MB)
  • 09 - Regularization/006 Weight regularization (L1L2) math.mp4 (49.28 MB)
  • 09 - Regularization/007 L2 regularization in practice.mp4 (78.5 MB)
  • 09 - Regularization/008 L1 regularization in practice.mp4 (70.93 MB)
  • 09 - Regularization/009 Training in mini-batches.mp4 (24.13 MB)
  • 09 - Regularization/010 Batch training in action.mp4 (76.4 MB)
  • 09 - Regularization/011 The importance of equal batch sizes.mp4 (51.33 MB)
  • 09 - Regularization/012 CodeChallenge Effects of mini-batch size.mp4 (83.29 MB)
  • 10 - Metaparameters (activations, optimizers)/001 What are metaparameters.mp4 (12.39 MB)
  • 10 - Metaparameters (activations, optimizers)/002 The wine quality dataset.mp4 (124.62 MB)
  • 10 - Metaparameters (activations, optimizers)/003 CodeChallenge Minibatch size in the wine dataset.mp4 (103.54 MB)
  • 10 - Metaparameters (activations, optimizers)/004 Data normalization.mp4 (45.4 MB)
  • 10 - Metaparameters (activations, optimizers)/005 The importance of data normalization.mp4 (47.77 MB)
  • 10 - Metaparameters (activations, optimizers)/006 Batch normalization.mp4 (39.12 MB)
  • 10 - Metaparameters (activations, optimizers)/007 Batch normalization in practice.mp4 (45.22 MB)
  • 10 - Metaparameters (activations, optimizers)/008 CodeChallenge Batch-normalize the qwerties.mp4 (39.88 MB)
  • 10 - Metaparameters (activations, optimizers)/009 Activation functions.mp4 (84.91 MB)
  • 10 - Metaparameters (activations, optimizers)/010 Activation functions in PyTorch.mp4 (67.03 MB)
  • 10 - Metaparameters (activations, optimizers)/011 Activation functions comparison.mp4 (70.58 MB)
  • 10 - Metaparameters (activations, optimizers)/012 CodeChallenge Compare relu variants.mp4 (63.97 MB)
  • 10 - Metaparameters (activations, optimizers)/013 CodeChallenge Predict sugar.mp4 (89.35 MB)
  • 10 - Metaparameters (activations, optimizers)/014 Loss functions.mp4 (68.57 MB)
  • 10 - Metaparameters (activations, optimizers)/015 Loss functions in PyTorch.mp4 (101.71 MB)
  • 10 - Metaparameters (activations, optimizers)/016 More practice with multioutput ANNs.mp4 (71.9 MB)
  • 10 - Metaparameters (activations, optimizers)/017 Optimizers (minibatch, momentum).mp4 (42.22 MB)
  • 10 - Metaparameters (activations, optimizers)/018 SGD with momentum.mp4 (62.1 MB)
  • 10 - Metaparameters (activations, optimizers)/019 Optimizers (RMSprop, Adam).mp4 (38.02 MB)
  • 10 - Metaparameters (activations, optimizers)/020 Optimizers comparison.mp4 (61.81 MB)
  • 10 - Metaparameters (activations, optimizers)/021 CodeChallenge Optimizers and... something.mp4 (36.55 MB)
  • 10 - Metaparameters (activations, optimizers)/022 CodeChallenge Adam with L2 regularization.mp4 (39.95 MB)
  • 10 - Metaparameters (activations, optimizers)/023 Learning rate decay.mp4 (69.09 MB)
  • 10 - Metaparameters (activations, optimizers)/024 How to pick the right metaparameters.mp4 (25.54 MB)
  • 11 - FFNs (Feed-Forward Networks)/001 What are fully-connected and feedforward networks.mp4 (12.65 MB)
  • 11 - FFNs (Feed-Forward Networks)/002 The MNIST dataset.mp4 (88.67 MB)
  • 11 - FFNs (Feed-Forward Networks)/003 FFN to classify digits.mp4 (117.29 MB)
  • 11 - FFNs (Feed-Forward Networks)/004 CodeChallenge Binarized MNIST images.mp4 (28.68 MB)
  • 11 - FFNs (Feed-Forward Networks)/005 CodeChallenge Data normalization.mp4 (70.98 MB)
  • 11 - FFNs (Feed-Forward Networks)/006 Distributions of weights pre- and post-learning.mp4 (84.77 MB)
  • 11 - FFNs (Feed-Forward Networks)/007 CodeChallenge MNIST and breadth vs. depth.mp4 (90.36 MB)
  • 11 - FFNs (Feed-Forward Networks)/008 CodeChallenge Optimizers and MNIST.mp4 (33.21 MB)
  • 11 - FFNs (Feed-Forward Networks)/009 Scrambled MNIST.mp4 (60.17 MB)
  • 11 - FFNs (Feed-Forward Networks)/010 Shifted MNIST.mp4 (57.33 MB)
  • 11 - FFNs (Feed-Forward Networks)/011 CodeChallenge The mystery of the missing 7.mp4 (53.42 MB)
  • 11 - FFNs (Feed-Forward Networks)/012 Universal approximation theorem.mp4 (24.22 MB)
  • 12 - More on data/001 Anatomy of a torch dataset and dataloader.mp4 (100.77 MB)
  • 12 - More on data/002 Data size and network size.mp4 (97.23 MB)
  • 12 - More on data/003 CodeChallenge unbalanced data.mp4 (117.83 MB)
  • 12 - More on data/004 What to do about unbalanced designs.mp4 (18.83 MB)
  • 12 - More on data/005 Data oversampling in MNIST.mp4 (89.28 MB)
  • 12 - More on data/006 Data noise augmentation (with devset+test).mp4 (76.14 MB)
  • 12 - More on data/007 Data feature augmentation.mp4 (114.33 MB)
  • 12 - More on data/008 Getting data into colab.mp4 (31.93 MB)
  • 12 - More on data/009 Save and load trained models.mp4 (38.72 MB)
  • 12 - More on data/010 Save the best-performing model.mp4 (90.08 MB)
  • 12 - More on data/011 Where to find online datasets.mp4 (28.46 MB)
  • 13 - Measuring model performance/001 Two perspectives of the world.mp4 (18.86 MB)
  • 13 - Measuring model performance/002 Accuracy, precision, recall, F1.mp4 (63.72 MB)
  • 13 - Measuring model performance/003 APRF in code.mp4 (38.19 MB)
  • 13 - Measuring model performance/004 APRF example 1 wine quality.mp4 (103 MB)
  • 13 - Measuring model performance/005 APRF example 2 MNIST.mp4 (94.47 MB)
  • 13 - Measuring model performance/006 CodeChallenge MNIST with unequal groups.mp4 (59.04 MB)
  • 13 - Measuring model performance/007 Computation time.mp4 (70.49 MB)
  • 13 - Measuring model performance/008 Better performance in test than train.mp4 (18.24 MB)
  • 14 - FFN milestone projects/001 Project 1 A gratuitously complex adding machine.mp4 (25.95 MB)
  • 14 - FFN milestone projects/002 Project 1 My solution.mp4 (69.82 MB)
  • 14 - FFN milestone projects/003 Project 2 Predicting heart disease.mp4 (23.67 MB)
  • 14 - FFN milestone projects/004 Project 2 My solution.mp4 (155.73 MB)
  • 14 - FFN milestone projects/005 Project 3 FFN for missing data interpolation.mp4 (19.61 MB)
  • 14 - FFN milestone projects/006 Project 3 My solution.mp4 (52.94 MB)
  • 15 - Weight inits and investigations/001 Explanation of weight matrix sizes.mp4 (59.62 MB)
  • 15 - Weight inits and investigations/002 A surprising demo of weight initializations.mp4 (85.9 MB)
  • 15 - Weight inits and investigations/003 Theory Why and how to initialize weights.mp4 (73.64 MB)
  • 15 - Weight inits and investigations/004 CodeChallenge Weight variance inits.mp4 (72.9 MB)
  • 15 - Weight inits and investigations/005 Xavier and Kaiming initializations.mp4 (96.29 MB)
  • 15 - Weight inits and investigations/006 CodeChallenge Xavier vs. Kaiming.mp4 (109.44 MB)
  • 15 - Weight inits and investigations/007 CodeChallenge Identically random weights.mp4 (65.27 MB)
  • 15 - Weight inits and investigations/008 Freezing weights during learning.mp4 (88.26 MB)
  • 15 - Weight inits and investigations/009 Learning-related changes in weights.mp4 (107.96 MB)
  • 15 - Weight inits and investigations/010 Use default inits or apply your own.mp4 (10.94 MB)
  • 16 - Autoencoders/001 What are autoencoders and what do they do.mp4 (21.2 MB)
  • 16 - Autoencoders/002 Denoising MNIST.mp4 (86.5 MB)
  • 16 - Autoencoders/003 CodeChallenge How many units.mp4 (100.01 MB)
  • 16 - Autoencoders/004 AEs for occlusion.mp4 (138.2 MB)
  • 16 - Autoencoders/005 The latent code of MNIST.mp4 (117.79 MB)
  • 16 - Autoencoders/006 Autoencoder with tied weights.mp4 (131.5 MB)
  • 17 - Running models on a GPU/001 What is a GPU and why use it.mp4 (50.35 MB)
  • 17 - Running models on a GPU/002 Implementation.mp4 (39.7 MB)
  • 17 - Running models on a GPU/003 CodeChallenge Run an experiment on the GPU.mp4 (36.94 MB)
  • 18 - Convolution and transformations/001 Convolution concepts.mp4 (88.41 MB)
  • 18 - Convolution and transformations/002 Feature maps and convolution kernels.mp4 (53.56 MB)
  • 18 - Convolution and transformations/003 Convolution in code.mp4 (165.71 MB)
  • 18 - Convolution and transformations/004 Convolution parameters (stride, padding).mp4 (27.36 MB)
  • 18 - Convolution and transformations/005 The Conv2 class in PyTorch.mp4 (75.51 MB)
  • 18 - Convolution and transformations/006 CodeChallenge Choose the parameters.mp4 (18.97 MB)
  • 18 - Convolution and transformations/007 Transpose convolution.mp4 (69.38 MB)
  • 18 - Convolution and transformations/008 Maxmean pooling.mp4 (51.24 MB)
  • 18 - Convolution and transformations/009 Pooling in PyTorch.mp4 (44.24 MB)
  • 18 - Convolution and transformations/010 To pool or to stride.mp4 (49.22 MB)
  • 18 - Convolution and transformations/011 Image transforms.mp4 (124.68 MB)
  • 18 - Convolution and transformations/012 Creating and using custom DataLoaders.mp4 (102.39 MB)
  • 19 - Understand and design CNNs/001 The canonical CNN architecture.mp4 (23.81 MB)
  • 19 - Understand and design CNNs/002 CNN to classify MNIST digits.mp4 (144.84 MB)
  • 19 - Understand and design CNNs/003 CNN on shifted MNIST.mp4 (41.39 MB)
  • 19 - Understand and design CNNs/004 Classify Gaussian blurs.mp4 (176.03 MB)
  • 19 - Understand and design CNNs/005 Examine feature map activations.mp4 (251.42 MB)
  • 19 - Understand and design CNNs/006 CodeChallenge Softcode internal parameters.mp4 (113.72 MB)
  • 19 - Understand and design CNNs/007 CodeChallenge How wide the FC.mp4 (90.56 MB)
  • 19 - Understand and design CNNs/008 Do autoencoders clean Gaussians.mp4 (128.83 MB)
  • 19 - Understand and design CNNs/009 CodeChallenge AEs and occluded Gaussians.mp4 (78.57 MB)
  • 19 - Understand and design CNNs/010 CodeChallenge Custom loss functions.mp4 (98.69 MB)
  • 19 - Understand and design CNNs/011 Discover the Gaussian parameters.mp4 (136.65 MB)
  • 19 - Understand and design CNNs/012 The EMNIST dataset (letter recognition).mp4 (143.87 MB)
  • 19 - Understand and design CNNs/013 Dropout in CNNs.mp4 (70.64 MB)
  • 19 - Understand and design CNNs/014 CodeChallenge How low can you go.mp4 (39.15 MB)
  • 19 - Understand and design CNNs/015 CodeChallenge Varying number of channels.mp4 (67.29 MB)
  • 19 - Understand and design CNNs/016 So many possibilities! How to create a CNN.mp4 (9.24 MB)
  • 20 - CNN milestone projects/001 Project 1 Import and classify CIFAR10.mp4 (36.58 MB)
  • 20 - CNN milestone projects/002 Project 1 My solution.mp4 (81.26 MB)
  • 20 - CNN milestone projects/003 Project 2 CIFAR-autoencoder.mp4 (29.25 MB)
  • 20 - CNN milestone projects/004 Project 3 FMNIST.mp4 (19.42 MB)
  • 20 - CNN milestone projects/005 Project 4 Psychometric functions in CNNs.mp4 (76.46 MB)
  • 21 - Transfer learning/001 Transfer learning What, why, and when.mp4 (40.48 MB)
  • 21 - Transfer learning/002 Transfer learning MNIST - FMNIST.mp4 (78.22 MB)
  • 21 - Transfer learning/003 CodeChallenge letters to numbers.mp4 (84.89 MB)
  • 21 - Transfer learning/004 Famous CNN architectures.mp4 (22.26 MB)
  • 21 - Transfer learning/005 Transfer learning with ResNet-18.mp4 (128.31 MB)
  • 21 - Transfer learning/006 CodeChallenge VGG-16.mp4 (20.28 MB)
  • 21 - Transfer learning/007 Pretraining with autoencoders.mp4 (135.97 MB)
  • 21 - Transfer learning/008 CIFAR10 with autoencoder-pretrained model.mp4 (108.86 MB)
  • 22 - Style transfer/001 What is style transfer and how does it work.mp4 (16.83 MB)
  • 22 - Style transfer/002 The Gram matrix (feature activation covariance).mp4 (66.49 MB)
  • 22 - Style transfer/003 The style transfer algorithm.mp4 (26.71 MB)
  • 22 - Style transfer/004 Transferring the screaming bathtub.mp4 (210.35 MB)
  • 22 - Style transfer/005 CodeChallenge Style transfer with AlexNet.mp4 (50.92 MB)
  • 23 - Generative adversarial networks/001 GAN What, why, and how.mp4 (38.68 MB)
  • 23 - Generative adversarial networks/002 Linear GAN with MNIST.mp4 (121.54 MB)
  • 23 - Generative adversarial networks/003 CodeChallenge Linear GAN with FMNIST.mp4 (58.54 MB)
  • 23 - Generative adversarial networks/004 CNN GAN with Gaussians.mp4 (131.44 MB)
  • 23 - Generative adversarial networks/005 CodeChallenge Gaussians with fewer layers.mp4 (51.28 MB)
  • 23 - Generative adversarial networks/006 CNN GAN with FMNIST.mp4 (46.94 MB)
  • 23 - Generative adversarial networks/007 CodeChallenge CNN GAN with CIFAR.mp4 (43.2 MB)
  • 24 - RNNs (Recurrent Neural Networks) (and GRULSTM)/001 Leveraging sequences in deep learning.mp4 (63.92 MB)
  • 24 - RNNs (Recurrent Neural Networks) (and GRULSTM)/002 How RNNs work.mp4 (32.64 MB)
  • 24 - RNNs (Recurrent Neural Networks) (and GRULSTM)/003 The RNN class in PyTorch.mp4 (89.64 MB)
  • 24 - RNNs (Recurrent Neural Networks) (and GRULSTM)/004 Predicting alternating sequences.mp4 (153.76 MB)
  • 24 - RNNs (Recurrent Neural Networks) (and GRULSTM)/005 CodeChallenge sine wave extrapolation.mp4 (166.64 MB)
  • 24 - RNNs (Recurrent Neural Networks) (and GRULSTM)/006 More on RNNs Hidden states, embeddings.mp4 (94.25 MB)
  • 24 - RNNs (Recurrent Neural Networks) (and GRULSTM)/007 GRU and LSTM.mp4 (100.32 MB)
  • 24 - RNNs (Recurrent Neural Networks) (and GRULSTM)/008 The LSTM and GRU classes.mp4 (84.32 MB)
  • 24 - RNNs (Recurrent Neural Networks) (and GRULSTM)/009 Lorem ipsum.mp4 (141.61 MB)
  • 25 - Ethics of deep learning/001 Will AI save us or destroy us.mp4 (23.82 MB)
  • 25 - Ethics of deep learning/002 Example case studies.mp4 (38.4 MB)
  • 25 - Ethics of deep learning/003 Some other possible ethical scenarios.mp4 (58.3 MB)
  • 25 - Ethics of deep learning/004 Will deep learning take our jobs.mp4 (33.82 MB)
  • 25 - Ethics of deep learning/005 Accountability and making ethical AI.mp4 (61.2 MB)
  • 26 - Where to go from here/001 How to learn topic _X_ in deep learning.mp4 (17.45 MB)
  • 26 - Where to go from here/002 How to read academic DL papers.mp4 (137.3 MB)
  • 27 - Python intro Data types/001 How to learn from the Python tutorial.mp4 (12.27 MB)
  • 27 - Python intro Data types/002 Variables.mp4 (41.07 MB)
  • 27 - Python intro Data types/003 Math and printing.mp4 (35.93 MB)
  • 27 - Python intro Data types/004 Lists (1 of 2).mp4 (24.85 MB)
  • 27 - Python intro Data types/005 Lists (2 of 2).mp4 (23.55 MB)
  • 27 - Python intro Data types/006 Tuples.mp4 (15.4 MB)
  • 27 - Python intro Data types/007 Booleans.mp4 (46.04 MB)
  • 27 - Python intro Data types/008 Dictionaries.mp4 (23.24 MB)
  • 28 - Python intro Indexing, slicing/001 Indexing.mp4 (23.41 MB)
  • 28 - Python intro Indexing, slicing/002 Slicing.mp4 (29.01 MB)
  • 29 - Python intro Functions/001 Inputs and outputs.mp4 (13.45 MB)
  • 29 - Python intro Functions/002 Python libraries (numpy).mp4 (27.96 MB)
  • 29 - Python intro Functions/003 Python libraries (pandas).mp4 (60.85 MB)
  • 29 - Python intro Functions/004 Getting help on functions.mp4 (24.8 MB)
  • 29 - Python intro Functions/005 Creating functions.mp4 (40.14 MB)
  • 29 - Python intro Functions/006 Global and local variable scopes.mp4 (39.19 MB)
  • 29 - Python intro Functions/007 Copies and referents of variables.mp4 (10.64 MB)
  • 29 - Python intro Functions/008 Classes and object-oriented programming.mp4 (60.61 MB)
  • 30 - Python intro Flow control/001 If-else statements.mp4 (30.16 MB)
  • 30 - Python intro Flow control/002 If-else statements, part 2.mp4 (53.74 MB)
  • 30 - Python intro Flow control/003 For loops.mp4 (44.7 MB)
  • 30 - Python intro Flow control/004 Enumerate and zip.mp4 (58.59 MB)
  • 30 - Python intro Flow control/005 Continue.mp4 (14.34 MB)
  • 30 - Python intro Flow control/006 Initializing variables.mp4 (46.46 MB)
  • 30 - Python intro Flow control/007 Single-line loops (list comprehension).mp4 (44.09 MB)
  • 30 - Python intro Flow control/008 while loops.mp4 (48.15 MB)
  • 30 - Python intro Flow control/009 Broadcasting in numpy.mp4 (37.14 MB)
  • 30 - Python intro Flow control/010 Function error checking and handling.mp4 (76.98 MB)
  • 31 - Python intro Text and plots/001 Printing and string interpolation.mp4 (47.18 MB)
  • 31 - Python intro Text and plots/002 Plotting dots and lines.mp4 (28.89 MB)
  • 31 - Python intro Text and plots/003 Subplot geometry.mp4 (48.72 MB)
  • 31 - Python intro Text and plots/004 Making the graphs look nicer.mp4 (59.02 MB)
  • 31 - Python intro Text and plots/005 Seaborn.mp4 (34.31 MB)
  • 31 - Python intro Text and plots/006 Images.mp4 (71.02 MB)
  • 31 - Python intro Text and plots/007 Export plots in low and high resolution.mp4 (37.38 MB)