Project import generated by Copybara.
GitOrigin-RevId: ff83882955f1a1e2a043ff4e71278be9d7217bbe
MediaPipe Team authored and chuoling committed May 5, 2021
1 parent ecb5b5f commit a9b643e
Showing 210 changed files with 5,286 additions and 3,812 deletions.
2 changes: 2 additions & 0 deletions Dockerfile
@@ -23,6 +23,7 @@ ENV DEBIAN_FRONTEND=noninteractive

RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential \
gcc-8 g++-8 \
ca-certificates \
curl \
ffmpeg \
@@ -44,6 +45,7 @@ RUN apt-get update && apt-get install -y --no-install-recommends \
apt-get clean && \
rm -rf /var/lib/apt/lists/*

RUN update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-8 100 --slave /usr/bin/g++ g++ /usr/bin/g++-8
RUN pip3 install --upgrade setuptools
RUN pip3 install wheel
RUN pip3 install future
8 changes: 5 additions & 3 deletions WORKSPACE
@@ -337,6 +337,8 @@ maven_install(
"androidx.test.espresso:espresso-core:3.1.1",
"com.github.bumptech.glide:glide:4.11.0",
"com.google.android.material:material:aar:1.0.0-rc01",
"com.google.auto.value:auto-value:1.6.4",
"com.google.auto.value:auto-value-annotations:1.6.4",
"com.google.code.findbugs:jsr305:3.0.2",
"com.google.flogger:flogger-system-backend:0.3.1",
"com.google.flogger:flogger:0.3.1",
@@ -367,9 +369,9 @@ http_archive(
)

# Tensorflow repo should always go after the other external dependencies.
# 2021-03-25
_TENSORFLOW_GIT_COMMIT = "c67f68021824410ebe9f18513b8856ac1c6d4887"
_TENSORFLOW_SHA256= "fd07d0b39422dc435e268c5e53b2646a8b4b1e3151b87837b43f86068faae87f"
# 2021-04-30
_TENSORFLOW_GIT_COMMIT = "5bd3c57ef184543d22e34e36cff9d9bea608e06d"
_TENSORFLOW_SHA256= "9a45862834221aafacf6fb275f92b3876bc89443cbecc51be93f13839a6609f0"
http_archive(
name = "org_tensorflow",
urls = [
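When the WORKSPACE pins a new `_TENSORFLOW_GIT_COMMIT`, the matching `_TENSORFLOW_SHA256` is simply the SHA-256 digest of the archive that `http_archive` downloads. A minimal sketch of computing such a digest (the download step is assumed; only the hashing, and the `sha256_hex` helper name, are illustrative):

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Return the hex SHA-256 digest that Bazel's http_archive expects."""
    return hashlib.sha256(data).hexdigest()

# In practice `data` is the downloaded .tar.gz of the pinned commit;
# a fixed byte string stands in for it here.
print(sha256_hex(b"hello"))
# -> 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
```

Equivalently, `sha256sum` on the downloaded tarball produces the same hex string to paste into the WORKSPACE.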
6 changes: 3 additions & 3 deletions build_desktop_examples.sh
@@ -17,15 +17,15 @@
# Script to build/run all MediaPipe desktop example apps (with webcam input).
#
# To build and run all apps and store them in out_dir:
# $ ./build_ios_examples.sh -d out_dir
# $ ./build_desktop_examples.sh -d out_dir
# Omitting -d and the associated directory saves all generated apps in the
# current directory.
# To build all apps and store them in out_dir:
# $ ./build_ios_examples.sh -d out_dir -b
# $ ./build_desktop_examples.sh -d out_dir -b
# Omitting -d and the associated directory saves all generated apps in the
# current directory.
# To run all apps already stored in out_dir:
# $ ./build_ios_examples.sh -d out_dir -r
# $ ./build_desktop_examples.sh -d out_dir -r
# Omitting -d and the associated directory assumes all apps are in the current
# directory.

5 changes: 2 additions & 3 deletions docs/framework_concepts/calculators.md
@@ -187,7 +187,7 @@ node {
```

In the calculator implementation, inputs and outputs are also identified by tag
name and index number. In the function below input are output are identified:
name and index number. In the function below input and output are identified:

* By index number: The combined input stream is identified simply by index
`0`.
@@ -355,7 +355,6 @@ class PacketClonerCalculator : public CalculatorBase {
current_[i].At(cc->InputTimestamp()));
// Add a packet to output stream of index i a packet from inputstream i
// with timestamp common to all present inputs
//
} else {
cc->Outputs().Index(i).SetNextTimestampBound(
cc->InputTimestamp().NextAllowedInStream());
@@ -382,7 +381,7 @@ defined your calculator class, register it with a macro invocation
REGISTER_CALCULATOR(calculator_class_name).
Below is a trivial MediaPipe graph that has 3 input streams, 1 node
(PacketClonerCalculator) and 3 output streams.
(PacketClonerCalculator) and 2 output streams.
```proto
input_stream: "room_mic_signal"
4 changes: 2 additions & 2 deletions docs/framework_concepts/graphs.md
@@ -83,12 +83,12 @@ Below is an example of how to create a subgraph named `TwoPassThroughSubgraph`.
output_stream: "out3"
node {
calculator: "PassThroughculator"
calculator: "PassThroughCalculator"
input_stream: "out1"
output_stream: "out2"
}
node {
calculator: "PassThroughculator"
calculator: "PassThroughCalculator"
input_stream: "out2"
output_stream: "out3"
}
4 changes: 2 additions & 2 deletions docs/getting_started/android.md
@@ -57,7 +57,7 @@ Please verify all the necessary packages are installed.
* Android SDK Build-Tools 28 or 29
* Android SDK Platform-Tools 28 or 29
* Android SDK Tools 26.1.1
* Android NDK 17c or above
* Android NDK 19c or above

### Option 1: Build with Bazel in Command Line

@@ -111,7 +111,7 @@ app:
* Verify that Android SDK Build-Tools 28 or 29 is installed.
* Verify that Android SDK Platform-Tools 28 or 29 is installed.
* Verify that Android SDK Tools 26.1.1 is installed.
* Verify that Android NDK 17c or above is installed.
* Verify that Android NDK 19c or above is installed.
* Take note of the Android NDK Location, e.g.,
`/usr/local/home/Android/Sdk/ndk-bundle` or
`/usr/local/home/Android/Sdk/ndk/20.0.5594570`.
52 changes: 22 additions & 30 deletions docs/getting_started/android_archive_library.md
@@ -37,34 +37,37 @@ each project.
load("//mediapipe/java/com/google/mediapipe:mediapipe_aar.bzl", "mediapipe_aar")
mediapipe_aar(
name = "mp_face_detection_aar",
name = "mediapipe_face_detection",
calculators = ["//mediapipe/graphs/face_detection:mobile_calculators"],
)
```
2. Run the Bazel build command to generate the AAR.
```bash
bazel build -c opt --host_crosstool_top=@bazel_tools//tools/cpp:toolchain \
--fat_apk_cpu=arm64-v8a,armeabi-v7a --strip=ALWAYS \
//path/to/the/aar/build/file:aar_name
bazel build -c opt --strip=ALWAYS \
--host_crosstool_top=@bazel_tools//tools/cpp:toolchain \
--fat_apk_cpu=arm64-v8a,armeabi-v7a \
//path/to/the/aar/build/file:aar_name.aar
```
For the face detection AAR target we made in the step 1, run:
For the face detection AAR target we made in step 1, run:
```bash
bazel build -c opt --host_crosstool_top=@bazel_tools//tools/cpp:toolchain --fat_apk_cpu=arm64-v8a,armeabi-v7a \
//mediapipe/examples/android/src/java/com/google/mediapipe/apps/aar_example:mp_face_detection_aar
bazel build -c opt --strip=ALWAYS \
--host_crosstool_top=@bazel_tools//tools/cpp:toolchain \
--fat_apk_cpu=arm64-v8a,armeabi-v7a \
//mediapipe/examples/android/src/java/com/google/mediapipe/apps/aar_example:mediapipe_face_detection.aar
# It should print:
# Target //mediapipe/examples/android/src/java/com/google/mediapipe/apps/aar_example:mp_face_detection_aar up-to-date:
# bazel-bin/mediapipe/examples/android/src/java/com/google/mediapipe/apps/aar_example/mp_face_detection_aar.aar
# Target //mediapipe/examples/android/src/java/com/google/mediapipe/apps/aar_example:mediapipe_face_detection.aar up-to-date:
# bazel-bin/mediapipe/examples/android/src/java/com/google/mediapipe/apps/aar_example/mediapipe_face_detection.aar
```
3. (Optional) Save the AAR to your preferred location.
```bash
cp bazel-bin/mediapipe/examples/android/src/java/com/google/mediapipe/apps/aar_example/mp_face_detection_aar.aar
cp bazel-bin/mediapipe/examples/android/src/java/com/google/mediapipe/apps/aar_example/mediapipe_face_detection.aar
/absolute/path/to/your/preferred/location
```
@@ -75,7 +78,7 @@ each project.
2. Copy the AAR into app/libs.
```bash
cp bazel-bin/mediapipe/examples/android/src/java/com/google/mediapipe/apps/aar_example/mp_face_detection_aar.aar
cp bazel-bin/mediapipe/examples/android/src/java/com/google/mediapipe/apps/aar_example/mediapipe_face_detection.aar
/path/to/your/app/libs/
```
@@ -92,29 +95,14 @@ each project.
[the face detection tflite model](https://github.com/google/mediapipe/tree/master/mediapipe/modules/face_detection/face_detection_front.tflite).
```bash
bazel build -c opt mediapipe/mediapipe/graphs/face_detection:mobile_gpu_binary_graph
cp bazel-bin/mediapipe/graphs/face_detection/mobile_gpu.binarypb /path/to/your/app/src/main/assets/
bazel build -c opt mediapipe/graphs/face_detection:face_detection_mobile_gpu_binary_graph
cp bazel-bin/mediapipe/graphs/face_detection/face_detection_mobile_gpu.binarypb /path/to/your/app/src/main/assets/
cp mediapipe/modules/face_detection/face_detection_front.tflite /path/to/your/app/src/main/assets/
```
![Screenshot](../images/mobile/assets_location.png)
4. Make app/src/main/jniLibs and copy OpenCV JNI libraries into
app/src/main/jniLibs.
MediaPipe depends on OpenCV, you will need to copy the precompiled OpenCV so
files into app/src/main/jniLibs. You can download the official OpenCV
Android SDK from
[here](https://github.com/opencv/opencv/releases/download/3.4.3/opencv-3.4.3-android-sdk.zip)
and run:
```bash
cp -R ~/Downloads/OpenCV-android-sdk/sdk/native/libs/arm* /path/to/your/app/src/main/jniLibs/
```
![Screenshot](../images/mobile/android_studio_opencv_location.png)
5. Modify app/build.gradle to add MediaPipe dependencies and MediaPipe AAR.
4. Modify app/build.gradle to add MediaPipe dependencies and MediaPipe AAR.
```
dependencies {
@@ -136,10 +124,14 @@ each project.
implementation "androidx.camera:camera-core:$camerax_version"
implementation "androidx.camera:camera-camera2:$camerax_version"
implementation "androidx.camera:camera-lifecycle:$camerax_version"
// AutoValue
def auto_value_version = "1.6.4"
implementation "com.google.auto.value:auto-value-annotations:$auto_value_version"
annotationProcessor "com.google.auto.value:auto-value:$auto_value_version"
}
```
6. Follow our Android app examples to use MediaPipe in Android Studio for your
5. Follow our Android app examples to use MediaPipe in Android Studio for your
use case. If you are looking for an example, a face detection example can be
found
[here](https://github.com/jiuqiant/mediapipe_face_detection_aar_example) and
4 changes: 2 additions & 2 deletions docs/getting_started/install.md
@@ -471,7 +471,7 @@ next section.
4. Install Visual C++ Build Tools 2019 and WinSDK
Go to
[the VisualStudio website](ttps://visualstudio.microsoft.com/visual-cpp-build-tools),
[the VisualStudio website](https://visualstudio.microsoft.com/visual-cpp-build-tools),
download build tools, and install Microsoft Visual C++ 2019 Redistributable
and Microsoft Build Tools 2019.
@@ -738,7 +738,7 @@ common build issues.
root@bca08b91ff63:/mediapipe# bash ./setup_android_sdk_and_ndk.sh
# Should print:
# Android NDK is now installed. Consider setting $ANDROID_NDK_HOME environment variable to be /root/Android/Sdk/ndk-bundle/android-ndk-r18b
# Android NDK is now installed. Consider setting $ANDROID_NDK_HOME environment variable to be /root/Android/Sdk/ndk-bundle/android-ndk-r19c
# Set android_ndk_repository and android_sdk_repository in WORKSPACE
# Done
2 changes: 1 addition & 1 deletion docs/getting_started/python.md
@@ -26,7 +26,7 @@ You can, for instance, activate a Python virtual environment:
$ python3 -m venv mp_env && source mp_env/bin/activate
```

Install MediaPipe Python package and start Python intepreter:
Install MediaPipe Python package and start Python interpreter:

```bash
(mp_env)$ pip install mediapipe
43 changes: 43 additions & 0 deletions docs/getting_started/troubleshooting.md
@@ -97,6 +97,49 @@ linux_opencv/macos_opencv/windows_opencv.BUILD files for your local opencv
libraries. [This GitHub issue](https://github.com/google/mediapipe/issues/666)
may also help.

## Python pip install failure

The error message:

```
ERROR: Could not find a version that satisfies the requirement mediapipe
ERROR: No matching distribution found for mediapipe
```

after running `pip install mediapipe` usually indicates that there is no qualified MediaPipe Python for your system.
Please note that MediaPipe Python PyPI officially supports the **64-bit**
version of Python 3.7 and above on the following OS:

- x86_64 Linux
- x86_64 macOS 10.15+
- amd64 Windows

If the OS is currently supported and you still see this error, please make sure
that both the Python and pip binary are for Python 3.7 and above. Otherwise,
please consider building the MediaPipe Python package locally by following the
instructions [here](python.md#building-mediapipe-python-package).
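
The interpreter-side constraints above (64-bit build, Python 3.7+) can be checked before involving pip at all. A minimal sketch — the `mediapipe_pip_supported` helper is hypothetical, not part of MediaPipe:

```python
import struct
import sys

def mediapipe_pip_supported(pointer_bits=None, version=None):
    """Rough pre-flight check for the PyPI constraints listed above."""
    if pointer_bits is None:
        # 8-byte pointers mean a 64-bit interpreter, 4-byte means 32-bit.
        pointer_bits = struct.calcsize("P") * 8
    if version is None:
        version = sys.version_info[:2]
    return pointer_bits == 64 and tuple(version) >= (3, 7)

print(mediapipe_pip_supported())  # True on a supported 64-bit Python 3.7+
```

A `False` here means the wheel lookup will fail regardless of the OS, so checking it first saves a confusing pip error.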

## Python DLL load failure on Windows

The error message:

```
ImportError: DLL load failed: The specified module could not be found
```

usually indicates that the local Windows system is missing Visual C++
redistributable packages and/or Visual C++ runtime DLLs. This can be solved by
either installing the official
[vc_redist.x64.exe](https://support.microsoft.com/en-us/topic/the-latest-supported-visual-c-downloads-2647da03-1eea-4433-9aff-95f26a218cc0)
or installing the "msvc-runtime" Python package by running

```bash
$ python -m pip install msvc-runtime
```

Please note that the "msvc-runtime" Python package is not released or maintained
by Microsoft.

## Native method not found

The error message:
Binary file modified docs/images/mobile/aar_location.png
Binary file modified docs/images/mobile/assets_location.png
Binary file added docs/images/mobile/pose_tracking_example.gif
2 changes: 1 addition & 1 deletion docs/solutions/face_detection.md
@@ -77,7 +77,7 @@ Supported configuration options:
```python
import cv2
import mediapipe as mp
mp_face_detction = mp.solutions.face_detection
mp_face_detection = mp.solutions.face_detection
mp_drawing = mp.solutions.drawing_utils

# For static images:
21 changes: 10 additions & 11 deletions docs/solutions/holistic.md
@@ -135,12 +135,11 @@ another detection until it loses track, on reducing computation and latency. If
set to `true`, person detection runs every input image, ideal for processing a
batch of static, possibly unrelated, images. Default to `false`.

#### upper_body_only
#### model_complexity

If set to `true`, the solution outputs only the 25 upper-body pose landmarks
(535 in total) instead of the full set of 33 pose landmarks (543 in total). Note
that upper-body-only prediction may be more accurate for use cases where the
lower-body parts are mostly out of view. Default to `false`.
Complexity of the pose landmark model: `0`, `1` or `2`. Landmark accuracy as
well as inference latency generally go up with the model complexity. Default to
`1`.

#### smooth_landmarks

@@ -207,7 +206,7 @@ install MediaPipe Python package, then learn more in the companion
Supported configuration options:

* [static_image_mode](#static_image_mode)
* [upper_body_only](#upper_body_only)
* [model_complexity](#model_complexity)
* [smooth_landmarks](#smooth_landmarks)
* [min_detection_confidence](#min_detection_confidence)
* [min_tracking_confidence](#min_tracking_confidence)
@@ -219,7 +218,9 @@ mp_drawing = mp.solutions.drawing_utils
mp_holistic = mp.solutions.holistic

# For static images:
with mp_holistic.Holistic(static_image_mode=True) as holistic:
with mp_holistic.Holistic(
static_image_mode=True,
model_complexity=2) as holistic:
for idx, file in enumerate(file_list):
image = cv2.imread(file)
image_height, image_width, _ = image.shape
@@ -240,8 +241,6 @@ with mp_holistic.Holistic(static_image_mode=True) as holistic:
annotated_image, results.left_hand_landmarks, mp_holistic.HAND_CONNECTIONS)
mp_drawing.draw_landmarks(
annotated_image, results.right_hand_landmarks, mp_holistic.HAND_CONNECTIONS)
# Use mp_holistic.UPPER_BODY_POSE_CONNECTIONS for drawing below when
# upper_body_only is set to True.
mp_drawing.draw_landmarks(
annotated_image, results.pose_landmarks, mp_holistic.POSE_CONNECTIONS)
cv2.imwrite('/tmp/annotated_image' + str(idx) + '.png', annotated_image)
@@ -291,7 +290,7 @@ and the following usage example.

Supported configuration options:

* [upperBodyOnly](#upper_body_only)
* [modelComplexity](#model_complexity)
* [smoothLandmarks](#smooth_landmarks)
* [minDetectionConfidence](#min_detection_confidence)
* [minTrackingConfidence](#min_tracking_confidence)
@@ -348,7 +347,7 @@ const holistic = new Holistic({locateFile: (file) => {
return `https://cdn.jsdelivr.net/npm/@mediapipe/holistic/${file}`;
}});
holistic.setOptions({
upperBodyOnly: false,
modelComplexity: 1,
smoothLandmarks: true,
minDetectionConfidence: 0.5,
minTrackingConfidence: 0.5
12 changes: 6 additions & 6 deletions docs/solutions/models.md
@@ -15,10 +15,10 @@ nav_order: 30
### [Face Detection](https://google.github.io/mediapipe/solutions/face_detection)

* Face detection model for front-facing/selfie camera:
[TFLite model](https://github.com/google/mediapipe/tree/master/mediapipe/models/face_detection_front.tflite),
[TFLite model](https://github.com/google/mediapipe/tree/master/mediapipe/modules/face_detection/face_detection_front.tflite),
[TFLite model quantized for EdgeTPU/Coral](https://github.com/google/mediapipe/tree/master/mediapipe/examples/coral/models/face-detector-quantized_edgetpu.tflite)
* Face detection model for back-facing camera:
[TFLite model ](https://github.com/google/mediapipe/tree/master/mediapipe/models/face_detection_back.tflite)
[TFLite model ](https://github.com/google/mediapipe/tree/master/mediapipe/modules/face_detection/face_detection_back.tflite)
* [Model card](https://mediapipe.page.link/blazeface-mc)

### [Face Mesh](https://google.github.io/mediapipe/solutions/face_mesh)
@@ -49,10 +49,10 @@

* Pose detection model:
[TFLite model](https://github.com/google/mediapipe/tree/master/mediapipe/modules/pose_detection/pose_detection.tflite)
* Full-body pose landmark model:
[TFLite model](https://github.com/google/mediapipe/tree/master/mediapipe/modules/pose_landmark/pose_landmark_full_body.tflite)
* Upper-body pose landmark model:
[TFLite model](https://github.com/google/mediapipe/tree/master/mediapipe/modules/pose_landmark/pose_landmark_upper_body.tflite)
* Pose landmark model:
[TFLite model (lite)](https://github.com/google/mediapipe/tree/master/mediapipe/modules/pose_landmark/pose_landmark_lite.tflite),
[TFLite model (full)](https://github.com/google/mediapipe/tree/master/mediapipe/modules/pose_landmark/pose_landmark_full.tflite),
[TFLite model (heavy)](https://github.com/google/mediapipe/tree/master/mediapipe/modules/pose_landmark/pose_landmark_heavy.tflite)
* [Model card](https://mediapipe.page.link/blazepose-mc)

### [Holistic](https://google.github.io/mediapipe/solutions/holistic)