
Unable to fetch/download models #1602

Closed
2er0 opened this issue Jun 8, 2020 · 38 comments · May be fixed by #2286

@2er0

2er0 commented Jun 8, 2020

Issue Summary

Unable to fetch/download models

Executed Command (if any)

models\getModels.bat

OpenPose Output (if any)

[screenshot: getModels.bat failing with download errors]

Type of Issue

  • Compilation/installation error
  • Execution error

Your System Configuration

  1. OpenPose version: 1.6.0 & 1.5.1 (possibly others; not tested)

  2. If Windows system:

    • Portable demo
@Qinzixin

Qinzixin commented Jun 8, 2020

I'm experiencing a similar problem: the BAT file generates many 503 warnings and is unable to download the models.
Is there an alternative?

@bellfeige

+1, 502 Bad Gateway error

@jacobideum

Same here, I am unable to reach the website. I get the following:

[screenshot: website unreachable error]

@gineshidalgo99
Member

gineshidalgo99 commented Jun 8, 2020

We are trying to get it back up, but due to COVID, we are waiting for CMU to grant access so we can officially enter the actual room where the server is to fix it. We will keep you updated. Sorry for the troubles.

@zoheezus

zoheezus commented Jun 9, 2020

@gineshidalgo99 Thanks for the update

@vidheyoza

@gineshidalgo99 If someone has a local copy of the files downloaded from the server, could they share it somewhere so that we can keep working until you fix this?

@jacobideum

@gineshidalgo99 Any estimate on how long it might be before the server is restored/an alternate download is available? Appreciate the help

@gineshidalgo99
Member

gineshidalgo99 commented Jun 10, 2020

I was finally able to copy the data off the servers (although they have gone down again), so this is the temporary workaround until I replace all the OpenPose links with Dropbox or Google Drive ones:

Download these links (either Google Drive or Dropbox):
G Drive version:
Models: https://drive.google.com/file/d/1QCSxJZpnWvM00hx49CJ2zky7PWGzpcEh
3rdparty before 2021: https://drive.google.com/file/d/1mqPEnqCk5bLMZ3XnfvxA4Dao7pj0TErr
3rdparty for 2021 versions: https://drive.google.com/file/d/1WvftDLLEwAxeO2A-n12g5IFtfLbMY9mG

Dropbox version:
https://www.dropbox.com/s/gpwg0tbsimo0fr5/models.zip
https://www.dropbox.com/s/1kfh7lqb9ptqj0l/3rdparty.zip

Then unzip them and copy the required files into the right place. To know where each file goes, follow:
https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/doc/prerequisites.md (if you are not using the latest OpenPose version, then follow the names used in whatever doc/prerequisites.md file you have!)

Sorry for the troubles!!!
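After unzipping, it is easy to end up with files in the wrong folders. A small stdlib check like the following can confirm that the main caffemodels ended up where OpenPose looks for them (a minimal sketch; `missing_models` is a hypothetical helper, and the path list only covers the body/face/hand models named elsewhere in this thread):

```python
from pathlib import Path

# Model files referenced in this thread and their expected locations
# relative to the OpenPose `models/` folder
EXPECTED_MODELS = [
    "pose/coco/pose_iter_440000.caffemodel",
    "pose/mpi/pose_iter_160000.caffemodel",
    "face/pose_iter_116000.caffemodel",
    "hand/pose_iter_102000.caffemodel",
]


def missing_models(models_root):
    """Return the expected model files that are absent under models_root."""
    root = Path(models_root)
    return [rel for rel in EXPECTED_MODELS if not (root / rel).is_file()]


if __name__ == "__main__":
    missing = missing_models("models")
    if missing:
        print("Missing model files:", *missing, sep="\n  ")
    else:
        print("All expected model files are in place.")
```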

@nicolasugrinovic

@gineshidalgo99 Thank you for the quick action; however, those links are down and cannot be found.

@gineshidalgo99
Member

I can't believe it: Dropbox blocked them within a few hours ("Your Dropbox public links have been suspended for generating excessive traffic"). I am uploading them to Google Drive; they should be ready in 1-2 h, and I'll update this post then.

@gineshidalgo99
Member

Links updated! Sorry again..

@gineshidalgo99 gineshidalgo99 added duplicate This issue or pull request already exists and removed duplicate This issue or pull request already exists labels Jun 11, 2020
@zoheezus

I was able to finally copy the data out of the servers (although they have fallen again), so this is the temporary workaround until I replace all OpenPose links for Dropbox or G. Drive ones:

Download these 2 links (either G Driver or Dropbox):
G Drive version:
https://drive.google.com/file/d/1mqPEnqCk5bLMZ3XnfvxA4Dao7pj0TErr/view?usp=sharing
https://drive.google.com/file/d/1QCSxJZpnWvM00hx49CJ2zky7PWGzpcEh/view?usp=sharing

How can we make use of these links if we are using OpenPose on Google Colab?
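For Colab specifically, one option (an untested sketch, assuming the third-party `gdown` package, which a later commit referenced in this thread also uses) is to download by Google Drive file ID. The helper below only builds the `uc?id=` direct-download URL form that gdown accepts; `MODELS_ID` is the ID from the links above:

```python
# Build the direct-download URL form that gdown accepts for a Google Drive
# file ID (sharing links use .../file/d/<ID>/view instead)
def drive_direct_url(file_id):
    return "https://drive.google.com/uc?id=" + file_id


MODELS_ID = "1QCSxJZpnWvM00hx49CJ2zky7PWGzpcEh"

if __name__ == "__main__":
    # On Colab one would then run, for example:
    #   !pip install gdown
    #   !gdown <url printed below> -O models.zip
    print(drive_direct_url(MODELS_ID))
```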

@14790897

After two days of searching, I finally found the correct model download link. Thank you all

@BowenTan02

BowenTan02 commented Feb 3, 2024

After two days of searching, I finally found the correct model download link. Thank you all

May I ask what is the correct link for the model? Thx!
I have searched for several hours, but all links were either expired or broken.

@14790897

14790897 commented Feb 4, 2024

After two days of searching, I finally found the correct model download link. Thank you all

May I ask what is the correct link for the model? Thx! I have searched for several hours, but all links were either expired or broken.

I was able to finally copy the data out of the servers (although they have fallen again), so this is the temporary workaround until I replace all OpenPose links for Dropbox or G. Drive ones:

Download these 2 links (either G Driver or Dropbox):
G Drive version:
Models: https://drive.google.com/file/d/1QCSxJZpnWvM00hx49CJ2zky7PWGzpcEh
3rdparty before 2021: https://drive.google.com/file/d/1mqPEnqCk5bLMZ3XnfvxA4Dao7pj0TErr
3rdparty for 2021 versions: https://drive.google.com/file/d/1WvftDLLEwAxeO2A-n12g5IFtfLbMY9mG

Dropbox version:
https://www.dropbox.com/s/gpwg0tbsimo0fr5/models.zip
https://www.dropbox.com/s/1kfh7lqb9ptqj0l/3rdparty.zip

And unzip them and copy in the right place the required files. To know where each file goes, follow:
https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/doc/prerequisites.md (if you are not using the latest OpenPose version, then follow the names used in whatever doc/prerequisites.md file you have!)

Sorry for the troubles!!!

@BowenTan02

After two days of searching, I finally found the correct model download link. Thank you all

May I ask what is the correct link for the model? Thx! I have searched for several hours, but all links were either expired or broken.

I was able to finally copy the data out of the servers (although they have fallen again), so this is the temporary workaround until I replace all OpenPose links for Dropbox or G. Drive ones:

Download these 2 links (either G Driver or Dropbox): G Drive version: Models: https://drive.google.com/file/d/1QCSxJZpnWvM00hx49CJ2zky7PWGzpcEh 3rdparty before 2021: https://drive.google.com/file/d/1mqPEnqCk5bLMZ3XnfvxA4Dao7pj0TErr 3rdparty for 2021 versions: https://drive.google.com/file/d/1WvftDLLEwAxeO2A-n12g5IFtfLbMY9mG

Dropbox version: https://www.dropbox.com/s/gpwg0tbsimo0fr5/models.zip https://www.dropbox.com/s/1kfh7lqb9ptqj0l/3rdparty.zip

And unzip them and copy in the right place the required files. To know where each file goes, follow: https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/doc/prerequisites.md (if you are not using the latest OpenPose version, then follow the names used in whatever doc/prerequisites.md file you have!)

Sorry for the troubles!!!

Thank you sooo much! It helped a lot!

@Aastha29P

Hi, the above-mentioned links do not work.
Can anyone help me with this issue? Your help is appreciated. Thank you!
I have observed that this link doesn't work: "http://posefs1.perception.cs.cmu.edu/OpenPose/models/"
I am deploying the OpenPose models in my code but getting this error when I run the .sh file.
[screenshot: wget errors when running the .sh file]

This is my .sh file

echo "------------------------- BODY, FACE AND HAND MODELS -------------------------"
echo "Downloading body pose (COCO and MPI), face and hand models..."
OPENPOSE_URL="http://posefs1.perception.cs.cmu.edu/OpenPose/models/"
POSE_FOLDER="pose/"
FACE_FOLDER="face/"
HAND_FOLDER="hand/"

echo "------------------------- POSE MODELS -------------------------"
# Body (COCO)
COCO_FOLDER=${POSE_FOLDER}"coco/"
COCO_MODEL=${COCO_FOLDER}"pose_iter_440000.caffemodel"
wget -c ${OPENPOSE_URL}${COCO_MODEL} -P ${COCO_FOLDER}
# Alternative (does not check whether the file was fully downloaded):
# if [ ! -f $COCO_MODEL ]; then
#     wget ${OPENPOSE_URL}$COCO_MODEL -P $COCO_FOLDER
# fi

# Body (MPI)
MPI_FOLDER=${POSE_FOLDER}"mpi/"
MPI_MODEL=${MPI_FOLDER}"pose_iter_160000.caffemodel"
wget -c ${OPENPOSE_URL}${MPI_MODEL} -P ${MPI_FOLDER}

echo "------------------------- FACE MODELS -------------------------"
# Face
FACE_MODEL=${FACE_FOLDER}"pose_iter_116000.caffemodel"
wget -c ${OPENPOSE_URL}${FACE_MODEL} -P ${FACE_FOLDER}

echo "------------------------- HAND MODELS -------------------------"
# Hand
HAND_MODEL=$HAND_FOLDER"pose_iter_102000.caffemodel"
wget -c ${OPENPOSE_URL}${HAND_MODEL} -P ${HAND_FOLDER}

@14790897

14790897 commented Feb 8, 2024

You need to manually download them.
@Aastha29P


Models: https://drive.google.com/file/d/1QCSxJZpnWvM00hx49CJ2zky7PWGzpcEh
3rdparty before 2021: https://drive.google.com/file/d/1mqPEnqCk5bLMZ3XnfvxA4Dao7pj0TErr
3rdparty for 2021 versions: https://drive.google.com/file/d/1WvftDLLEwAxeO2A-n12g5IFtfLbMY9mG
And unzip them and copy in the right place the required files. To know where each file goes, follow:
https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/doc/prerequisites.md (if you are not using the latest OpenPose version, then follow the names used in whatever doc/prerequisites.md file you have!)

@Aastha29P

Hi @14790897 Thanks for replying!
The links you provided don't work.
Can you provide any other links from where I can download the model?

@14790897

14790897 commented Feb 8, 2024

These Google Drive links are live:
[screenshot: the Google Drive links opening successfully]
@Aastha29P

@Aastha29P

@14790897 Thanks for clarifying it. Could you tell me where these downloaded files should be placed? As in which directory?
This link: https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/doc/prerequisites.md is showing a 404 error.

@14790897

14790897 commented Feb 8, 2024

There are two ways to use it. The first is to use the official compiled package (Windows Portable Demo), but its models folder is empty; in that case, replace the official empty models folder with the models folder from the first link. The second is to compile from scratch; in that case, follow the tutorial below to put the corresponding files in place.
https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/doc/installation/1_prerequisites.md#windows-prerequisites

Caffe, OpenCV, and Caffe prerequisites:
CMake automatically downloads all the Windows DLLs. Alternatively, you might prefer to download them manually:
Dependencies:
Note: Leave the zip files in 3rdparty/windows/ so that CMake does not try to download them again.
Caffe (if you are not sure which one you need, download the default one):
CUDA Caffe (Default): Unzip as 3rdparty/windows/caffe/.
CPU Caffe: Unzip as 3rdparty/windows/caffe_cpu/.
OpenCL Caffe: Unzip as 3rdparty/windows/caffe_opencl/.
Caffe dependencies: Unzip as 3rdparty/windows/caffe3rdparty/.
OpenCV 4.2.0: Unzip as 3rdparty/windows/opencv/.
@Aastha29P

@BowenTan02

@14790897 Thanks for clarifying it. Could you tell me where these downloaded files should be placed? As in which directory? This link: https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/doc/prerequisites.md is showing a 404 error.

Currently, you can just download the models from the link provided and place the corresponding models into the folders of the OpenPose Windows release. However, running face and hand keypoint detection requires a lot of GPU RAM, and Linux/Colab is preferable if you do not have access to a powerful enough GPU on Windows. I am still trying to figure out how to install and run OpenPose on Colab.

@14790897

14790897 commented Feb 9, 2024

I am still trying to figure out how to install and run OpenPose on Colab.

I've tried many Colab scripts, but none of them worked. Good luck.

@BowenTan02

I've tried many colab scripts but none of them worked. Good luck.

I am still trying to figure out how to install and run OpenPose on Colab.

Yeah.... I think it is a problem with the CUDA/cuDNN/GCC/torch versions on Colab; it is such a struggle to downgrade all of these things together...

@Aastha29P

Thanks @14790897 @BowenTan02 for your resources and help! I can run my .sh file with the downloaded models.
When I run my poseDetectVideo.py, the output is very slow: the video takes about 7 seconds to move from one frame to the next. I haven't used the 3rdparty folder in my code; maybe that is required to optimise it.
Currently, I have poseDetectVideo.py, getModels.sh, a data folder (which contains the video file) and a models folder (provided in the above messages). I am running these in VS Code.
Can anyone guide me on how to optimise the code, and how to leverage CMake and 3rdparty in my code?
poseDetectVideo.py

# import the necessary packages
import time

import cv2
import imutils
import numpy as np
from imutils.video import FileVideoStream

fvs = FileVideoStream('data/cam1.mp4', queue_size=1024).start()
time.sleep(1.0)

kernelSize = 7
backgroundHistory = 15

openposeProtoFile = "models/pose/coco/pose_deploy_linevec.prototxt"
openposeWeightsFile = "models/pose/coco/pose_iter_440000.caffemodel"
nPoints = 18

# COCO Output Format
keypointsMapping = ['Nose', 'Neck', 'R-Sho', 'R-Elb', 'R-Wr', 'L-Sho', 'L-Elb', 'L-Wr', 'R-Hip', 'R-Knee', 'R-Ank',
                    'L-Hip', 'L-Knee', 'L-Ank', 'R-Eye', 'L-Eye', 'R-Ear', 'L-Ear']

POSE_PAIRS = [[1, 2], [1, 5], [2, 3], [3, 4], [5, 6], [6, 7],
              [1, 8], [8, 9], [9, 10], [1, 11], [11, 12], [12, 13],
              [1, 0], [0, 14], [14, 16], [0, 15], [15, 17],
              [2, 17], [5, 16]]

# index of PAFs corresponding to the POSE_PAIRS
# e.g. for POSE_PAIR (1,2), the PAFs are located at indices (31,32) of the output; similarly, (1,5) -> (39,40) and so on.
mapIdx = [[31, 32], [39, 40], [33, 34], [35, 36], [41, 42], [43, 44],
          [19, 20], [21, 22], [23, 24], [25, 26], [27, 28], [29, 30],
          [47, 48], [49, 50], [53, 54], [51, 52], [55, 56],
          [37, 38], [45, 46]]

colors = [[0, 100, 255], [0, 100, 255], [0, 255, 255], [0, 100, 255], [0, 255, 255], [0, 100, 255],
          [0, 255, 0], [255, 200, 100], [255, 0, 255], [0, 255, 0], [255, 200, 100], [255, 0, 255],
          [0, 0, 255], [255, 0, 0], [200, 200, 0], [255, 0, 0], [200, 200, 0], [0, 0, 0]]


def getKeypoints(prob_map, thres=0.1):
    map_smooth = cv2.GaussianBlur(prob_map, (3, 3), 0, 0)

    map_mask = np.uint8(map_smooth > thres)
    keypoints_array = []

    # find the blobs
    contours, _ = cv2.findContours(map_mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

    # for each blob find the maxima
    for cnt in contours:
        blob_mask = np.zeros(map_mask.shape)
        blob_mask = cv2.fillConvexPoly(blob_mask, cnt, 1)
        masked_prob_map = map_smooth * blob_mask
        _, max_val, _, max_loc = cv2.minMaxLoc(masked_prob_map)
        keypoints_array.append(max_loc + (prob_map[max_loc[1], max_loc[0]],))

    return keypoints_array


# Find valid connections between the different joints of all persons present
def getValidPairs(generated_output):
    validpairs = []
    invalidpairs = []
    n_interp_samples = 10
    paf_score_th = 0.1
    conf_th = 0.7
    # loop for every POSE_PAIR
    for k in range(len(mapIdx)):
        # A->B constitute a limb
        pafA = generated_output[0, mapIdx[k][0], :, :]
        pafB = generated_output[0, mapIdx[k][1], :, :]
        pafA = cv2.resize(pafA, (frameWidth, frameHeight))
        pafB = cv2.resize(pafB, (frameWidth, frameHeight))

        # Find the keypoints for the first and second limb
        candA = detected_keypoints[POSE_PAIRS[k][0]]
        candB = detected_keypoints[POSE_PAIRS[k][1]]
        nA = len(candA)
        nB = len(candB)

        # If keypoints for the joint-pair are detected,
        # check every joint in candA with every joint in candB,
        # calculate the distance vector between the two joints,
        # find the PAF values at a set of interpolated points between the joints,
        # and use the above formula to compute a score to mark the connection valid
        if nA != 0 and nB != 0:
            valid_pair = np.zeros((0, 3))
            for i in range(nA):
                max_j = -1
                max_score = -1
                found = 0
                for j in range(nB):
                    # Find d_ij
                    d_ij = np.subtract(candB[j][:2], candA[i][:2])
                    norm = np.linalg.norm(d_ij)
                    if norm:
                        d_ij = d_ij / norm
                    else:
                        continue
                    # Find p(u)
                    interp_coord = list(zip(np.linspace(candA[i][0], candB[j][0], num=n_interp_samples),
                                            np.linspace(candA[i][1], candB[j][1], num=n_interp_samples)))
                    # Find L(p(u))
                    # NOTE: the inner loop variable is renamed from `k` to `idx`
                    # so it no longer shadows the outer POSE_PAIR index
                    paf_interp = []
                    for idx in range(len(interp_coord)):
                        paf_interp.append([pafA[int(round(interp_coord[idx][1])), int(round(interp_coord[idx][0]))],
                                           pafB[int(round(interp_coord[idx][1])), int(round(interp_coord[idx][0]))]])
                    # Find E
                    paf_scores = np.dot(paf_interp, d_ij)
                    avg_paf_score = sum(paf_scores) / len(paf_scores)

                    # Check if the connection is valid:
                    # if the fraction of interpolated vectors aligned with the PAF
                    # is higher than the threshold -> valid pair
                    if (len(np.where(paf_scores > paf_score_th)[0]) / n_interp_samples) > conf_th:
                        if avg_paf_score > max_score:
                            max_j = j
                            max_score = avg_paf_score
                            found = 1
                # Append the connection to the list
                if found:
                    valid_pair = np.append(valid_pair, [[candA[i][3], candB[max_j][3], max_score]], axis=0)

            # Append the detected connections to the global list
            validpairs.append(valid_pair)
        else:  # If no keypoints are detected
            invalidpairs.append(k)
            validpairs.append([])
    return validpairs, invalidpairs


# This function creates a list of keypoints belonging to each person
# For each detected valid pair, it assigns the joint(s) to a person
def getPersonwiseKeypoints(validpairs, invalidpairs):
    # the last number in each row is the overall score
    personwise_keypoints = -1 * np.ones((0, 19))

    for k in range(len(mapIdx)):
        if k not in invalidpairs:
            partAs = validpairs[k][:, 0]
            partBs = validpairs[k][:, 1]
            indexA, indexB = np.array(POSE_PAIRS[k])

            for i in range(len(validpairs[k])):
                found = 0
                person_idx = -1
                for j in range(len(personwise_keypoints)):
                    if personwise_keypoints[j][indexA] == partAs[i]:
                        person_idx = j
                        found = 1
                        break

                if found:
                    personwise_keypoints[person_idx][indexB] = partBs[i]
                    personwise_keypoints[person_idx][-1] += keypoints_list[partBs[i].astype(int), 2] + validpairs[k][i][2]

                # if no partA is found in the subset, create a new subset
                elif not found and k < 17:
                    row = -1 * np.ones(19)
                    row[indexA] = partAs[i]
                    row[indexB] = partBs[i]
                    # add the keypoint scores for the two keypoints and the paf_score
                    row[-1] = sum(keypoints_list[validpairs[k][i, :2].astype(int), 2]) + validpairs[k][i][2]
                    personwise_keypoints = np.vstack([personwise_keypoints, row])
    return personwise_keypoints


fgbg = cv2.createBackgroundSubtractorMOG2(history=backgroundHistory, detectShadows=True)
kernel = np.ones((kernelSize, kernelSize), np.uint8)

# Load the network once, outside the frame loop: re-creating it per frame
# (as the original script did) reloads the ~200 MB caffemodel on every frame
# and is a major cause of the slow, multi-second frame times
net = cv2.dnn.readNetFromCaffe(openposeProtoFile, openposeWeightsFile)

while fvs.more():
    frame = fvs.read()
    frame = imutils.resize(frame, width=960)

    frameClone = frame.copy()

    frameWidth = frame.shape[1]
    frameHeight = frame.shape[0]

    # Fix the input height and get the width according to the aspect ratio
    inHeight = 368
    inWidth = int((inHeight / frameHeight) * frameWidth)
    inpBlob = cv2.dnn.blobFromImage(frame, 1.0 / 255, (inWidth, inHeight), (0, 0, 0), swapRB=False, crop=False)

    net.setInput(inpBlob)
    output = net.forward()

    # Applying background subtraction on the captured frame
    # frame = fgbg.apply(frame)

    detected_keypoints = []
    keypoints_list = np.zeros((0, 3))
    keypoint_id = 0
    threshold = 0.1

    for part in range(nPoints):
        probMap = output[0, part, :, :]
        probMap = cv2.resize(probMap, (frame.shape[1], frame.shape[0]))
        keypoints = getKeypoints(probMap, threshold)

        keypoints_with_id = []
        for i in range(len(keypoints)):
            keypoints_with_id.append(keypoints[i] + (keypoint_id,))
            keypoints_list = np.vstack([keypoints_list, keypoints[i]])
            keypoint_id += 1

        detected_keypoints.append(keypoints_with_id)

    # for i in range(nPoints):
    #     for j in range(len(detected_keypoints[i])):
    #         cv2.circle(frame, detected_keypoints[i][j][0:2], 5, colors[i], -1, cv2.LINE_AA)
    # cv2.imshow("Keypoints", frame)

    valid_pairs, invalid_pairs = getValidPairs(output)
    personwiseKeypoints = getPersonwiseKeypoints(valid_pairs, invalid_pairs)

    for i in range(17):
        for n in range(len(personwiseKeypoints)):
            index = personwiseKeypoints[n][np.array(POSE_PAIRS[i])]
            if -1 in index:
                continue
            B = np.int32(keypoints_list[index.astype(int), 0])
            A = np.int32(keypoints_list[index.astype(int), 1])
            cv2.line(frame, (B[0], A[0]), (B[1], A[1]), colors[i], 2, cv2.LINE_AA)

    frame = cv2.addWeighted(frameClone, 0.5, frame, 0.5, 0.0)

    cv2.imshow("Frame", frame)
    k = cv2.waitKey(50) & 0xff
    if k == 27:
        break

# do a bit of cleanup
cv2.destroyAllWindows()
fvs.stop()

getModels.sh

# ------------------------- BODY, FACE AND HAND MODELS -------------------------
# Setting paths to the downloaded models
OPENPOSE_Models="models/"
POSE_FOLDER="models/pose/"
FACE_FOLDER="models/face/"
HAND_FOLDER="models/hand/"

# ------------------------- POSE MODELS -------------------------
# Body (COCO)
COCO_MODEL="${OPENPOSE_Models}pose/coco/pose_iter_440000.caffemodel"

# Body (MPI)
MPI_MODEL="${OPENPOSE_Models}pose/mpi/pose_iter_160000.caffemodel"

# ------------------------- FACE MODELS -------------------------
# Face
FACE_MODEL="${OPENPOSE_Models}face/pose_iter_116000.caffemodel"

# ------------------------- HAND MODELS -------------------------
# Hand
HAND_MODEL="${OPENPOSE_Models}hand/pose_iter_102000.caffemodel"

I have pasted the 3rdparty folder into my directory, but I don't know how to use it or CMake in my code.
Your help is appreciated. Thanks!

@14790897

@Aastha29P For some reason, your GPU isn't being used. I've given up on this project because it's too hard to configure this thing.
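Whatever the cause, it helps to measure per-frame time before optimising further. A minimal stdlib sketch (`measure_fps` and `process_frame` are hypothetical names; `process_frame` stands in for the per-frame work in the script above):

```python
import time


def measure_fps(process_frame, n_frames=10):
    """Run process_frame n_frames times and return the average frames per second."""
    start = time.perf_counter()
    for _ in range(n_frames):
        process_frame()
    elapsed = time.perf_counter() - start
    return n_frames / elapsed if elapsed > 0 else float("inf")


if __name__ == "__main__":
    # ~7 s per frame, as reported above, would show up as ~0.14 FPS here
    fps = measure_fps(lambda: time.sleep(0.01), n_frames=5)
    print(f"{fps:.1f} FPS")
```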

KWNahyun added a commit to KWNahyun/openpose that referenced this issue Mar 16, 2024
As stated in issue CMU-Perceptual-Computing-Lab#1602, the model link is currently broken.
This commit addresses the problem by suggesting the alternative model links
that the maintainer provided.

We use the supplementary package 'gdown' to download the
Google Drive files.
@Decide02

I was able to finally copy the data out of the servers (although they have fallen again), so this is the temporary workaround until I replace all OpenPose links for Dropbox or G. Drive ones:

Download these 2 links (either G Driver or Dropbox): G Drive version: Models: https://drive.google.com/file/d/1QCSxJZpnWvM00hx49CJ2zky7PWGzpcEh 3rdparty before 2021: https://drive.google.com/file/d/1mqPEnqCk5bLMZ3XnfvxA4Dao7pj0TErr 3rdparty for 2021 versions: https://drive.google.com/file/d/1WvftDLLEwAxeO2A-n12g5IFtfLbMY9mG

Dropbox version: https://www.dropbox.com/s/gpwg0tbsimo0fr5/models.zip https://www.dropbox.com/s/1kfh7lqb9ptqj0l/3rdparty.zip

And unzip them and copy in the right place the required files. To know where each file goes, follow: https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/doc/prerequisites.md (if you are not using the latest OpenPose version, then follow the names used in whatever doc/prerequisites.md file you have!)

Sorry for the troubles!!!

My local environment: Ubuntu 20.04 / RTX 3090 / driver 535 / CUDA 11.8 / cuDNN 8.4.1.
I simply assumed it was a CUDA or cuDNN version issue, since OpenPose has been around for a while and my GPU is relatively new.

So, I went all the way to a Chinese site and downgraded to the 2021 release, but in the end it was just a matter of installing the models separately, and now I can see the skeleton in the demo video properly!

As a bonus, I've found that it works just fine in my current local environment, without having to downgrade to the 2021 release. Thank you!

@amit26112000

I simply assumed it was a CUDA or cuDNN version issue since OpenPose has been around for a while and my GPU is relatively new.

So, I went all the way to a Chinese site and downgraded to the 2021 release.

Can you tell me exactly how you did it?

@Decide02

Decide02 commented Apr 15, 2024

I simply assumed it was a CUDA or cuDNN version issue since OpenPose has been around for a while and my GPU is relatively new. So, I went all the way to a Chinese site and downgraded to the 2021 release.

Can you tell me exactly how you did it?

The first problem I hit was that the JSON files were empty even when I ran the OpenPose demo code. The first command I ran was:

./build/examples/openpose/openpose.bin --video examples/media/video.avi --write_json output_json_folder/

but I couldn't see any skeleton appearing. At first, I thought it was a CUDA compatibility problem, because OpenPose was released a long time ago, so I changed the versions of the Nvidia driver, CUDA, and cuDNN several times. This is the Chinese website I mentioned; the author there also used the same RTX 3090 as me, and it worked with Nvidia driver 470, CUDA 11.4, and cuDNN 8.2.4.15.

https://blog.csdn.net/yxdayd/article/details/119780910

In conclusion, it was my mistake due to the missing models files, not a CUDA version issue. I have confirmed experimentally that both my existing CUDA version and the CUDA version mentioned on the Chinese site work without any problems. The versions above are just for reference if you need them; if you need help, please be specific.
Thank you!
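The empty-JSON symptom described above is easy to scan for in bulk. A minimal stdlib sketch (`frames_without_people` is a hypothetical helper; it assumes the `--write_json` output format with a top-level "people" array):

```python
import json
from pathlib import Path


def frames_without_people(json_folder):
    """Return JSON files whose 'people' array is empty (no skeleton detected)."""
    empty = []
    for path in sorted(Path(json_folder).glob("*.json")):
        data = json.loads(path.read_text())
        if not data.get("people"):
            empty.append(path.name)
    return empty


if __name__ == "__main__":
    for name in frames_without_people("output_json_folder"):
        print("No skeleton in:", name)
```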

@amit26112000

https://blog.csdn.net/yxdayd/article/details/119780910 this is not opening

@Decide02

https://blog.csdn.net/yxdayd/article/details/119780910 this is not opening

Sorry, I'm not very familiar with GitHub. It's an external link, so you can copy and paste it yourself; I have also fixed the hyperlink, so it should work now.
