Hi,

Recently, we configured a Radisys ENP-2505 Intel IXP1200 evaluation board (revision C0) running the Linux kernel (2.3.99) provided by the Intel IXA SDK. When trying to run the example count application from the Intel IXA SDK 2.01, we noticed the following problems:

1/ When using a 10 Mbit hub between the IXF440 ports and the traffic generator (a Linux host), no data is received in the ENP-2505. It seems the 10/100 Mbit half-/full-duplex autonegotiation does not work properly (the 10 Mbit link LED is lit, however).

2/ When using a crossover cable between the traffic generator and one of the ports of the ENP-2505 (thus 100 Mbit full-duplex), the application seems to work (on that one port, of course), but we notice packet loss of about 20%. The cable was tested between two regular PCs and is fine. Generating packets with ping (without parameters) or with ping -f (more intensive) does not influence the packet loss rate, so sending one or a hundred packets each second does not seem to matter.

The same problem occurs with the more advanced examples such as the L3 forwarder, which makes us think the Ingress ACEs provided with the IXA SDK are not suitable for the ENP boards.

Has anyone encountered (and solved) one of these problems? Any help would be greatly appreciated.

Kind regards,
Tim & Koert
> 1/ when using a 10 mbit hub between the IXF440 ports and the traffic generator (linux host), no data is received in the ENP-2505. Seems like the 10/100 mbit half-/full-duplex autonegotiation does not work properly (10mbit link LED is lit however).
Make sure the ports are in promiscuous mode. Also, is this a hub or a switch? Using a managed switch can really help here, since you can see exactly what's going into the board. It also might be the board's problem; I've had some trouble getting the ports to configure themselves correctly, though I think that was on an older revision.
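For reference, on the Linux side promiscuous mode can be toggled per interface with the standard SIOCGIFFLAGS/SIOCSIFFLAGS ioctls (the constants below come from Linux's `<linux/if.h>` and `<linux/sockios.h>`). This is only a generic Linux sketch, not ENP-specific code, and `set_promisc` needs root:

```python
import fcntl
import socket
import struct

# Constants from <linux/if.h> and <linux/sockios.h>
IFF_PROMISC = 0x100
SIOCGIFFLAGS = 0x8913
SIOCSIFFLAGS = 0x8914

def promisc_flags(current_flags):
    """Return the interface flag word with IFF_PROMISC set."""
    return current_flags | IFF_PROMISC

def set_promisc(ifname):
    """Enable promiscuous mode on a Linux interface (requires root)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        # struct ifreq: 16-byte name followed by a short flags field
        ifreq = struct.pack('16sH', ifname.encode()[:15], 0)
        res = fcntl.ioctl(s.fileno(), SIOCGIFFLAGS, ifreq)
        flags = struct.unpack('16sH', res)[1]
        ifreq = struct.pack('16sH', ifname.encode()[:15],
                            promisc_flags(flags))
        fcntl.ioctl(s.fileno(), SIOCSIFFLAGS, ifreq)
    finally:
        s.close()
```

The equivalent one-liner is `ifconfig eth0 promisc` (or, on modern systems, `ip link set eth0 promisc on`).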
> 2/ when using a crossover cable between the traffic generator and one of the ports of the ENP-2505 (thus 100 mbit full-duplex), the application seems to work (on that one port of course), but we notice a packet loss (about 20%).
How much are you trying to send? With minimum-sized packets there's enough inter-packet gap to bring the maximum data throughput down to around 80 Mbit/s. If your packet generator says it's sending 100 Mbit/s worth of min-sized packets, I would think it's lying :-)
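To put a number on that: each minimum-size 64-byte Ethernet frame also costs 20 bytes on the wire (8-byte preamble plus the 12-byte inter-frame gap), so the frame data that can actually cross a 100 Mbit/s link works out to roughly 76 Mbit/s. A back-of-the-envelope check:

```python
# Theoretical max frame throughput of 100 Mbit/s Ethernet
# with minimum-size (64-byte) frames.
LINK_MBPS = 100
MIN_FRAME = 64          # bytes, including headers and FCS
OVERHEAD = 8 + 12       # preamble + inter-frame gap, in bytes

def max_frame_throughput_mbps(link_mbps=LINK_MBPS, frame=MIN_FRAME):
    """Fraction of link capacity left for frame data."""
    return link_mbps * frame / (frame + OVERHEAD)

print(round(max_frame_throughput_mbps(), 1))  # ~76.2 Mbit/s
```

So a generator claiming 100 Mbit/s of min-sized frames is over-reporting by design; counting the frames, not the bits, is the safer way to measure loss.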
> The cable is tested between two regular PCs and is fine. Generating packets with ping (without parameters) or with ping -f (more intensive) does not influence the packet loss rate, so sending one or hundred packets each second does not seem to matter.
The ring-buffers, IIRC, are all 128 entries long, and the code can handle well over 100 packets a second.
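The point is that a 128-entry ring only drops packets when arrivals outpace the consumer for long enough to fill it, which a ping at one or even a hundred packets per second should never do. A toy model (my own illustration, not the SDK's actual code) of that drop behavior:

```python
from collections import deque

class RxRing:
    """Toy model of a fixed-depth receive ring: packets that arrive
    while the ring is full are counted as dropped."""

    def __init__(self, size=128):
        self.size = size
        self.ring = deque()
        self.dropped = 0

    def enqueue(self, pkt):
        """Producer side: returns False (and counts a drop) when full."""
        if len(self.ring) >= self.size:
            self.dropped += 1
            return False
        self.ring.append(pkt)
        return True

    def dequeue(self):
        """Consumer side: returns None when the ring is empty."""
        return self.ring.popleft() if self.ring else None
```

A burst of 200 back-to-back packets with no consumer running would drop 72 of them; at ping rates, with the consumer keeping up, the ring should never come close to full, which suggests the 20% loss is happening elsewhere.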
> The same problem is encountered with the more advanced examples such as the L3-forwarder, which makes me think the IXA SDK provided Ingress ACEs are not suitable for the ENP boards.
I've been using the SDK's Ingress ACE on the ENP-2505 and have seen it handle 240+ Mbit/s (60+ Mbit/s on each port) of minimum-sized packets without dropping anything. I suspect it can handle more, but we don't have a real packet generator, so I have yet to confirm that. In any event, their Ingress code should meet your needs, especially early in development.

Najati
participants (2)
-
Koert Vlaeminck
-
Najati Imam