
where can i find ncnn docs? #307

Closed
bobby-chiu opened this issue Mar 26, 2018 · 11 comments

@bobby-chiu

I have used and tested ncnn for deep learning applications, but it is unclear to me how to dive into the full API and framework of the ncnn library. Is any public documentation available now?

@nihui
Member

nihui commented Mar 28, 2018

@bobby-chiu
Author

@nihui, is there complete API documentation available for users?

@bobby-chiu
Author

@nihui, I am confused by some of the APIs. How can I load a model from a memory buffer? My code looks like the following:
FILE* fp = fopen(modelpath.c_str(), "rb");
fread((void*)buffer, 1, size, fp);
load_param_bin(buffer);
load_model(buffer);

Is there an overloaded interface like this for loading a model directly from an allocated buffer?
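As an aside, the snippet above never allocates `buffer` or determines `size` before calling `fread`. A minimal stdlib-only sketch of that step (the model path is hypothetical, and error handling is reduced to returning an empty buffer):

```cpp
#include <cassert>
#include <cstdio>
#include <vector>

// Read an entire file into a byte buffer; returns an empty vector on failure.
static std::vector<unsigned char> read_file(const char* path)
{
    std::vector<unsigned char> buffer;
    FILE* fp = fopen(path, "rb");
    if (!fp)
        return buffer;

    // Determine the file size, then rewind and read everything.
    fseek(fp, 0, SEEK_END);
    long size = ftell(fp);
    fseek(fp, 0, SEEK_SET);

    buffer.resize((size_t)size);
    if (fread(buffer.data(), 1, (size_t)size, fp) != (size_t)size)
        buffer.clear();
    fclose(fp);
    return buffer;
}
```

`buffer.data()` could then be handed to the buffer-loading overloads discussed below.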

@bobby-chiu
Author

Another question: how do I read from a concatenated model file, generated as follows?
$ cat alexnet.param.bin alexnet.bin > alexnet-all.bin

#include "net.h"
FILE* fp = fopen("alexnet-all.bin", "rb");
net.load_param_bin(fp);
net.load_bin(fp); // i guess this may call net.load_model(fp) ?
fclose(fp);

I tested loading models this way, but got errors from the sample code:
./test_ncnn plane.jpg
find_blob_index_by_name data failed
find_blob_index_by_name prob failed

@nihui
Member

nihui commented Mar 28, 2018

// load concatenated model buffer
// without checking the return values
buffer += net.load_param_bin(buffer);
buffer += net.load_model(buffer);

// load concatenated model buffer
// checking the return values
int nread = 0;
nread = net.load_param_bin(buffer);
if (nread != SIZE_OF_PARAMBIN)
    // return error

buffer += nread;

nread = net.load_model(buffer);
if (nread != SIZE_OF_MODELBIN)
    // return error

buffer += nread;
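The consume-and-advance pattern above can be sketched in a self-contained way. Here `load_param_bin_stub`/`load_model_stub` and the two `SIZE_OF_*` constants are hypothetical stand-ins for `net.load_param_bin()`/`net.load_model()` and the actual section sizes; each loader consumes one section and returns the number of bytes read:

```cpp
// Hypothetical section sizes and parsers standing in for the real
// net.load_param_bin()/net.load_model() calls.
static const int SIZE_OF_PARAMBIN = 4;
static const int SIZE_OF_MODELBIN = 8;

static int load_param_bin_stub(const unsigned char* buffer) { (void)buffer; return SIZE_OF_PARAMBIN; }
static int load_model_stub(const unsigned char* buffer) { (void)buffer; return SIZE_OF_MODELBIN; }

// Load both sections of a concatenated model buffer.
// Returns 0 on success, -1 if either section came up short.
static int load_concatenated(const unsigned char* buffer)
{
    int nread = load_param_bin_stub(buffer);
    if (nread != SIZE_OF_PARAMBIN)
        return -1;
    buffer += nread; // advance past the param section

    nread = load_model_stub(buffer);
    if (nread != SIZE_OF_MODELBIN)
        return -1;
    buffer += nread; // buffer now points past the whole model

    return 0;
}
```

The key point is that each loader reports how many bytes it consumed, so the caller advances the same pointer through the concatenated buffer.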

About the second question: yes, it should be net.load_model(fp). I will correct it ASAP.
Use the blob index enum value instead of a plain string for input() and extract() if you load the param file in binary mode.
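A non-runnable sketch of what that enum-based usage might look like, assuming an AlexNet model converted with ncnn2mem (the header name `alexnet.id.h` and the enum names `BLOB_data`/`BLOB_prob` depend on the generated file, so treat them as illustrative):

```cpp
#include "net.h"
#include "alexnet.id.h" // generated by ncnn2mem; defines the blob index enums

ncnn::Net net;
net.load_param_bin("alexnet.param.bin");
net.load_model("alexnet.bin");

ncnn::Extractor ex = net.create_extractor();
ex.input(BLOB_data, in);    // enum value instead of "data"
ex.extract(BLOB_prob, out); // enum value instead of "prob"
```

With a binary param file the blob names are stripped, which is why the string lookups in the error output above (`find_blob_index_by_name data failed`) cannot succeed.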

@bobby-chiu
Author

@nihui, how can I get the blob index from a given name string? I checked the code; what I found is the protected Net member function find_blob_index_by_name, but it cannot be used from outside.

@bobby-chiu
Author

For loading a concatenated model buffer, I am not sure where to find the matching member functions load_param_bin/load_model taking a (void* buffer) argument. They seem not to be implemented in the current released version? Did I make a mistake?

@nihui
Member

nihui commented Mar 28, 2018

The blob index enum can be looked up in the xxx.id.h file, which is generated by ncnn2mem.
There are load_param_bin and load_model overloads taking an unsigned char pointer; that is what you need.

@bobby-chiu
Author

@nihui, thanks a lot. When I converted my Caffe model to ncnn, I finally found the blob index enum, and now I can get any blob by its enum value. But if a model is loaded from a memory buffer, how can I access the top/bottom blobs by an implicit index? That way I could wrap my modeling process as a general API for any supported model.

@bobby-chiu
Author

@nihui, for a buffer loaded with load_param_bin(unsigned char*), I cannot find the overloaded API. Is it the same as loading a buffer with load_param?

@bobby-chiu
Author

@nihui, I have figured out this problem. Thanks!
