I have a legacy USB 1.1 device whose software I am updating. My development machine is a Z68-based PC which no longer has UHCI (1.1) ports - it only has two EHCI (2.0) ports. The motherboard places an internal hub between the external connectors and the EHCI controller, so my 1.1 device effectively looks like it is plugged into a 2.0 hub when connected to this development machine.
Anyway, the software mods I made to the device required increasing its bulk transfer bandwidth from 800kb/s to 920kb/s. This new rate has always worked just fine on my development machine. But then I plugged the device into the computers we are currently shipping, which are pre-Sandy Bridge, and the device's internal queue started getting overrun, resulting in lost data. This older computer has 6 UHCI (1.1) ports and 2 EHCI (2.0) ports. Looking at Device Manager, the device gets connected directly to a UHCI port.
The device worked on these older machines with UHCI ports for several years when the bandwidth was only 800kb/s.
But if I take an external USB 2.0 hub, plug it into this older machine, and plug our device into the hub, the device no longer gets queue overruns at the new 920kb/s rate!
For some bizarre reason, this USB 1.1 device can get more throughput when it is connected to a USB 2.0 hub than it can if it is directly connected to a UHCI 1.1 port!
If you have any ideas why this might be happening, please let me know. I'm also looking for any workarounds/hacks that don't involve physical hardware.
* Win7 64-bit
* device is using 3 bulk pipes to send 64-byte packets of real-time data to the computer
* using WinUSB and overlapped I/O with a buffer size of 12*64 bytes, and 100 of these buffers queued (increasing the buffer size above 12*64 did not seem to improve anything, and sizes below 12*64 made things worse)
* when using a 2.0 hub, I could reduce buffer size to 4*64 with no loss
* no isochronous pipes
* no interrupt pipes
* no other USB traffic during bulk transfer
* device is a low-volume scientific instrument
Thanks for any help!
-todd-