Created Wed, 26 Oct 2011 13:58:48 +0000 by peplin
Wed, 26 Oct 2011 13:58:48 +0000
I'm just getting started with USB programming using the recently released Microchip library on the chipKIT with the network shield. I've tried to learn as much as I can about proper USB programming and it's been good for the most part - however, I'm stuck at a ~75KB/s bulk transfer speed.
Some quick background - I need to create a simple USB device that spits out discrete messages to the host PC as fast as possible. The current strategy is to use minified JSON delimited by newlines, sent via a bulk transfer endpoint. (Of course this could be made faster using plain binary, but this will do for now.)
I've trimmed down the GenericUSB example to test the maximum transfer rate, and created a Python receiver (wrapping libusb) for the host. The code is posted here if someone with more experience has a minute to take a look: < Argh, I can't post URLs to the forum. The code is available at "github / openxc / arduino-transfer-benchmarking" but without the spaces and with the github part completed with the top level domain. Sorry.>
On the chipKIT side, the main loop is pretty simple and is basically this:
// using our own loop seems faster than using the arduino's
handleInput = usb.GenWrite(DATA_ENDPOINT, messageBuffer, messageSize);
In Python it's a similar thing: a loop that requests a read of some size until it hits 10MB transferred. The actual read function that gets called is in receiver.py in the repository.
If you download the benchmarkUSB.pde sketch to the chipKIT, then run "python receiver.py" it will read 10MB from the device with various "read request" sizes ranging from 64 bytes (one packet) to 1KB.
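For reference, a loop comparable to the receiver can be sketched with PyUSB (a libusb wrapper, much like what receiver.py uses); the vendor/product IDs and endpoint address below are placeholders for illustration, not necessarily the project's actual values:

```python
import time

TOTAL_BYTES = 10 * 1024 * 1024  # stop after 10MB, like the benchmark

def throughput_kb_per_s(byte_count, seconds):
    # report each run as KB/s
    return byte_count / float(seconds) / 1024.0

def benchmark(request_size):
    # 0x04D8/0x000C and endpoint 0x81 are placeholder values
    import usb.core
    device = usb.core.find(idVendor=0x04D8, idProduct=0x000C)
    device.set_configuration()
    received = 0
    start = time.time()
    while received < TOTAL_BYTES:
        received += len(device.read(0x81, request_size, timeout=1000))
    print("%4d byte requests: %.1f KB/s"
          % (request_size, throughput_kb_per_s(received, time.time() - start)))

if __name__ == "__main__":
    # request sizes from one packet (64 bytes) up to 1KB
    for size in (64, 128, 256, 512, 1024):
        benchmark(size)
```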
It's my understanding that having the host request more data at a time should increase throughput, since per-transfer messaging overhead drops - but for some reason that isn't the case for my code. You'll see that as the request size increases, the throughput actually drops from 75KB/s to even lower numbers.
Any help would be greatly appreciated, as I'm stumped at this point. I don't think the problem is in the Python code, because I've also tried this benchmark from Java (on Android) and had similar results. That code is also in the Github repository - like the Python it's a pretty simple benchmark, it just does a bulk read of a certain size until it hits 10MB.
(Another thing I've noticed is that the
#define USB_SPEED_OPTION USB_FULL_SPEED
line in usb_config.h seems to have no effect. Whether I define low or full speed, the throughput is the same measly 75KB/s.)
Fri, 28 Oct 2011 20:51:26 +0000
Only a thought.
What is the size of your endpoint buffer, and the size of your message? If your message is larger than the endpoint buffer by even a byte, you'll get a second transfer for the same message, increasing overhead - so a smaller message may be faster than a larger one.
Further, if you can match your payload to your endpoint buffer size, you may maximize speed.
Fri, 28 Oct 2011 21:06:25 +0000
That's a good point - just to make sure it's not a problem right now, I'm only using a 43 byte message with a 64 byte buffer. Each should only trigger one transfer, and really just a single packet.
Wed, 02 Nov 2011 14:29:54 +0000
Doesn't USB send batches of up to 512 KB/sec? Is it speed that is crucial or data volume?
Wed, 02 Nov 2011 14:41:21 +0000
I'm not sure what you mean by batches - the maximum bulk transfer packet size for USB 2.0 is 512 bytes, which might be what you're referring to. I'm stuck with USB 1.1 here, so my packets are limited to 64 bytes.
The data doesn't come all at once, so I'm stuck sending small messages and not one big chunk. This doesn't mean that the host PC can't request more than 1 message's worth of data at a time, though - that's where I'm stuck.
Mon, 06 Feb 2012 19:46:51 +0000
I want to update this thread because I believe I solved the problem!
Jacob, I think your advice was on the right track about matching buffer sizes.
I've summarized the whole issue here (http://christopherpeplin.com/2012/02/bulk-usb-throughput/) and copied the solution below:
The problem ended up being much simpler and had more to do with one of the core design principles of USB. After shelving this issue for a few months, I revisited the problem and something caught my eye.
No matter how many bytes were requested on the host, from 64 to 4096, the read operation only ever returned 45 bytes - one message. USB uses 64 byte packets, and a packet shorter than 64 bytes (traditionally, but not limited to, a zero length packet) indicates the end of a transfer. Were we causing a lot of extra overhead by ending every transfer after 45 bytes?
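In other words, the host considers a bulk transfer complete either when the requested length has arrived or when any packet comes up short. A rough model of that rule (illustrative only, not actual host controller code):

```python
MAX_PACKET = 64  # full-speed bulk endpoint max packet size

def bytes_per_transfer(packet_lengths, requested):
    # a transfer ends when the requested byte count is reached OR when
    # a packet shorter than MAX_PACKET arrives, whichever comes first
    total = 0
    for length in packet_lengths:
        total += length
        if total >= requested or length < MAX_PACKET:
            break
    return total
```

So a single 45 byte message ends even a 1024 byte request immediately: `bytes_per_transfer([45, 45, 45], 1024)` returns 45, no matter how many more messages are queued behind it.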
I padded out the 45 byte test message I was using to 64 bytes, and now it is much faster (from 70KB/s to 650+ KB/s as measured from Python). Previously, we requested 1024 bytes but got 45, which is less than a full 64 byte packet - so of course the transfer was closed.
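The fix on the sending side can be sketched like this (shown in Python for clarity - the actual change was in the chipKIT sketch): pad each newline-delimited message out to a multiple of the 64 byte packet size so a short packet never terminates the transfer early.

```python
MAX_PACKET = 64  # full-speed bulk endpoint max packet size

def pad_message(message):
    # pad a newline-terminated message to a multiple of MAX_PACKET so
    # the device never sends a short packet mid-stream; the spaces
    # inserted before the trailing newline are harmless to a JSON parser
    remainder = len(message) % MAX_PACKET
    if remainder == 0:
        return message
    padding = MAX_PACKET - remainder
    return message[:-1] + b" " * padding + message[-1:]
```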
In hindsight this seems like a pretty important thing to know about USB, but being new to driver development, it wasn't obvious and I couldn't find any references elsewhere to how a less-than-max-length packet can affect performance.
Wed, 08 Feb 2012 02:31:43 +0000
I want to update this thread because I believe I solved the problem! ...snip... In hindsight this seems like a pretty important thing to know about USB, but being new to driver development, it wasn't obvious and I couldn't find any references elsewhere to how a less-than-max-length packet can affect performance.
No Chris, it's not your fault. It's not your fault, it's not your fault. (Think Good Will Hunting - this is where you start to cry.)
Protocols should not be designed by a committee.