Sharing data between threads

Modify data

Invariant: a statement that is always true about a particular data structure; the definition can be found here.

Threads modifying data may temporarily break invariants (see the example of deleting a node from a doubly linked list).

Problematic race conditions typically occur where completing an operation requires modifying two or more distinct pieces of data.

Data race (will be introduced later).

Solutions:

  • Ensure that only the thread performing a modification can see the intermediate states where the invariants are broken (use a mutex).
  • Change the design of the data structure so that each modification is an indivisible change (lock-free programming).
  • Handle each update as a transaction, as a database does.

Protecting shared data with mutexes

Make access to the data structure mutually exclusive by using a mutex.

The mutex has its own problems: deadlock, and protecting either too much or too little data.

Besides, stray pointers and references can defeat the protection. Programmers should follow this guideline: don't pass pointers and references to protected data outside the scope of the lock, whether by returning them from a function, storing them in externally visible memory, or passing them as arguments to user-supplied functions.

Example: a stack shared by multiple threads. The interface itself is racy: a call to empty() followed by a call to top() is not atomic, so another thread may pop() in between.
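
A minimal sketch of the racy pattern (illustrative; here `s` is assumed to be a plain std::stack<int> shared between threads with no external locking):

```cpp
#include <stack>

std::stack<int> s;  // shared between threads, no internal locking

void racy_consumer() {
    if (!s.empty()) {          // another thread may pop() right here...
        int value = s.top();   // ...so top() can read from an empty stack
        s.pop();
        (void)value;           // use value ...
    }
}
```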

TODO: some options to avoid race conditions

Thread-safe stack: see listing 3.5, but watch out for the following (a sketch follows this list):

  1. delete some operators/functions (e.g. the copy-assignment operator)
  2. add a mutable mutex member variable
  3. take a lock_guard in every member function
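
A minimal sketch in the spirit of listing 3.5 (hand-written here, not the book's exact code; the empty_stack exception type is illustrative). Note how pop() combines top() and pop() into one indivisible operation, removing the interface race described above:

```cpp
#include <exception>
#include <memory>
#include <mutex>
#include <stack>

struct empty_stack : std::exception {
    const char* what() const noexcept override { return "empty stack"; }
};

template <typename T>
class threadsafe_stack {
    std::stack<T> data;
    mutable std::mutex m;  // mutable so empty() can lock even though it is const
public:
    threadsafe_stack() = default;
    threadsafe_stack(const threadsafe_stack& other) {
        std::lock_guard<std::mutex> lock(other.m);
        data = other.data;
    }
    threadsafe_stack& operator=(const threadsafe_stack&) = delete;  // deleted

    void push(T value) {
        std::lock_guard<std::mutex> lock(m);
        data.push(std::move(value));
    }
    // top() and pop() fused into a single locked operation
    std::shared_ptr<T> pop() {
        std::lock_guard<std::mutex> lock(m);
        if (data.empty()) throw empty_stack();
        auto res = std::make_shared<T>(std::move(data.top()));
        data.pop();
        return res;
    }
    bool empty() const {
        std::lock_guard<std::mutex> lock(m);
        return data.empty();
    }
};
```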

std::lock: a function that can lock two or more mutexes at once without risk of deadlock.
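
A hedged sketch of the usual pattern (the Account type and transfer function are made up for illustration): std::lock acquires both mutexes atomically, and std::adopt_lock tells each lock_guard it is adopting a mutex that is already locked.

```cpp
#include <mutex>

struct Account {
    std::mutex m;
    double balance = 0.0;
};

// std::lock avoids deadlock regardless of the order in which different
// threads call transfer(). Assumes from and to are distinct objects.
void transfer(Account& from, Account& to, double amount) {
    std::lock(from.m, to.m);
    std::lock_guard<std::mutex> g1(from.m, std::adopt_lock);
    std::lock_guard<std::mutex> g2(to.m, std::adopt_lock);
    from.balance -= amount;
    to.balance += amount;
}
```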

Hierarchical locks: give each mutex a hierarchy level and only allow a thread to lock a mutex whose level is lower than any it already holds, so a consistent lock order (and thus deadlock freedom) is enforced by design.
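
A compact sketch of the idea (modeled on the book's hierarchical_mutex; the thread_local bookkeeping here is illustrative):

```cpp
#include <climits>
#include <mutex>
#include <stdexcept>

class hierarchical_mutex {
    std::mutex internal;
    const unsigned long level;         // this mutex's hierarchy level
    unsigned long previous_level = 0;  // level to restore on unlock
    static thread_local unsigned long this_thread_level;

    void check_for_violation() const {
        if (this_thread_level <= level)
            throw std::logic_error("mutex hierarchy violated");
    }
public:
    explicit hierarchical_mutex(unsigned long lvl) : level(lvl) {}

    void lock() {
        check_for_violation();  // may only descend the hierarchy
        internal.lock();
        previous_level = this_thread_level;
        this_thread_level = level;
    }
    void unlock() {
        this_thread_level = previous_level;  // restore the old level
        internal.unlock();
    }
};

// No lock held yet: any level is allowed.
thread_local unsigned long hierarchical_mutex::this_thread_level = ULONG_MAX;
```

Because it provides lock() and unlock(), it can be used with std::lock_guard<hierarchical_mutex> like any other mutex.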

std::unique_lock: more flexible than std::lock_guard (e.g. std::try_to_lock, deferred locking); it unlocks automatically on destruction, but only if it still owns the mutex (it keeps track itself).

std::adopt_lock (the mutex is already locked; just adopt ownership) vs. std::defer_lock (leave the mutex unlocked at construction; lock it later), as shown below.
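
A small sketch of the std::defer_lock side (m1, m2, and the function name are made up; contrast with the std::adopt_lock pattern shown earlier):

```cpp
#include <mutex>

std::mutex m1, m2;

// With std::defer_lock the unique_locks are constructed without acquiring
// the mutexes; std::lock then acquires both at once without deadlock.
// (With std::adopt_lock it is the other way round: the mutexes are locked
// first and the guards merely adopt ownership.)
void deferred() {
    std::unique_lock<std::mutex> a(m1, std::defer_lock);
    std::unique_lock<std::mutex> b(m2, std::defer_lock);
    std::lock(a, b);  // std::lock can operate on unique_locks directly
    // ... access data protected by m1 and m2 ...
}   // both automatically unlocked when the unique_locks are destroyed
```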

std::unique_lock stores a flag indicating whether it currently owns the mutex, which makes it slightly larger and slower than std::lock_guard.

std::call_once and std::once_flag ensure that initialization is done exactly once, even when invoked from multiple threads concurrently.
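
A minimal sketch (the Resource type and get_resource function are illustrative):

```cpp
#include <memory>
#include <mutex>

struct Resource { /* expensive to construct */ };

std::once_flag resource_flag;
std::shared_ptr<Resource> resource;

// Even if many threads call this concurrently, the lambda runs exactly once.
Resource& get_resource() {
    std::call_once(resource_flag, [] {
        resource = std::make_shared<Resource>();
    });
    return *resource;
}
```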

std::shared_timed_mutex (C++14) and std::shared_mutex (C++17) enable reader/writer locking: multiple readers can hold a std::shared_lock concurrently, while a writer takes exclusive ownership.
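
A reader/writer sketch (C++17; the dns_cache class is loosely modeled on the book's DNS cache example, names illustrative):

```cpp
#include <map>
#include <mutex>
#include <shared_mutex>
#include <string>

class dns_cache {
    std::map<std::string, std::string> entries;
    mutable std::shared_mutex m;
public:
    // Readers share the lock, so lookups can proceed concurrently.
    std::string find(const std::string& domain) const {
        std::shared_lock<std::shared_mutex> lock(m);
        auto it = entries.find(domain);
        return it == entries.end() ? std::string() : it->second;
    }
    // A writer takes exclusive ownership, blocking readers and other writers.
    void update(const std::string& domain, const std::string& address) {
        std::unique_lock<std::shared_mutex> lock(m);
        entries[domain] = address;
    }
};
```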


Some concepts in Machine Learning

  • Logits: https://stackoverflow.com/questions/41455101/what-is-the-meaning-of-the-word-logits-in-tensorflow
  • Cross-entropy
  • Softmax
  • MSE: Mean Squared Error Loss
  • text corpus
  • context variable: the encoder transforms an input sequence of variable length into a fixed-shape context variable c, and encodes the input sequence information in this context variable
  • Ablation: https://en.wikipedia.org/wiki/Ablation_(artificial_intelligence)
  • pretext: the task being solved is not of genuine interest, but is solved only for the true purpose of learning a good data representation
  • pretext task: the self-supervised learning task solved to learn visual representations, with the aim of using the learned representations or model weights obtained in the process for the downstream task

Other terms and their explanations: https://developers.google.com/machine-learning/glossary/


Queue

First, note that the queue model is based on the Poisson distribution. Here are three characteristics of the Poisson distribution:

  • The experiment consists of counting the number of events that will occur during a specific interval of time or in a specific distance, area, or volume.
  • The probability that an event occurs is the same for every interval of equal time, distance, area, or volume.
  • Each event is independent of all other events. For example, the number of people who arrive in the first hour is independent of the number who arrive in any other hour.

Little’s Theorem

[latex]N = \lambda T[/latex]

where

N = average number of customers in the system

λ = average arrival rate

T = average sojourn (stay) time of a customer

Applying Little's Theorem to the network delay environment

We should also add some notation:

ρ: the line's utilization factor (we will see later that [latex]\rho = \frac{\lambda}{\mu}[/latex], where μ is the service rate)

X: average transmission time

[latex]\rho = \lambda X[/latex]

Since the average transmission time is the reciprocal of the service rate, [latex]X = \frac{1}{\mu}[/latex], this gives [latex]\rho = \frac{\lambda}{\mu}[/latex].

Now we introduce the model:

M/M/1 Model
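
For reference, the standard M/M/1 steady-state results (textbook formulas, stated here without derivation; they assume [latex]\rho = \frac{\lambda}{\mu} < 1[/latex]):

[latex]p_n = (1-\rho)\rho^n[/latex] (probability of n customers in the system)

[latex]N = \frac{\rho}{1-\rho}[/latex] (average number of customers in the system)

[latex]T = \frac{N}{\lambda} = \frac{1}{\mu - \lambda}[/latex] (average sojourn time, by Little's Theorem)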


Notes from constructing the scheduler

  1. How is the ResNet code in tensorflow/models distributed?

It uses the high-level Estimator API.

The distribution strategy is configured via distributed_strategy in utils/misc.

2. How can the code in https://github.com/geifmany/cifar-vgg be made distributed?

Referring to the TensorFlow tutorial:

Set a distribution strategy and put both model construction and model compilation inside its scope.

However, VGG uses data augmentation, which conflicts with distribution!

In the Keras tutorial, if we use the fit_generator method, we hit this error:

`fit_generator` is not supported for models compiled with tf.distribute.strategy.

Our TensorFlow version is 1.14.


(ImageDataGenerator tutorial code, not reproduced here)

If we use the 'manual' example in the official tutorial, the training becomes weird:

Using a single GPU, this is the stdout:

The above is normal (though different from using fit_generator). Below is the distributed version using MirroredStrategy; it is abnormal:

The distributed version gets stuck in the first epoch, and the loss stays high for a long time.

Solution:

This issue suggests using tf.data.Dataset.from_generator to wrap the generator.
