
Trex hugepage memory issue

bbarot
Level 1

bbarot@trex06.iad0:~/v3.02$ sudo ./_t-rex-64 -i -c 7 --astf --cfg /etc/trex_cfg_593vlan.yaml --mbuf-factor 0.2 -v 7

Starting  TRex v3.02 please wait  ...

Using configuration file /etc/trex_cfg_593vlan.yaml

port limit     :  not configured

port_bandwidth_gb    :  10

port_speed           :  0

port_mtu             :  0

if_mask        : None

prefix              : trex6

is low-end : 0

stack type : 

limit_memory        : 2048

thread_per_dual_if      : 1

if        :  --vdev=net_bonding0,mode=2,slave=00:07.0,slave=00:08.0,mac=52:54:00:ee:ee:04, --vdev=net_bonding1,mode=2,slave=00:09.0,slave=00:0a.0,mac=52:54:00:ee:ee:03, --vdev=net_bonding0,mode=2,slave=00:0b.0,slave=00:0c.0,mac=52:54:00:ef:ee:04, --vdev=net_bonding1,mode=2,slave=00:0d.0,slave=00:0e.0,mac=52:54:00:ef:ee:03,

enable_zmq_pub :  1

zmq_pub_port   :  4500

m_zmq_rpc_port    :  4501

src     : 00:00:00:00:00:00

dest    : 00:00:00:00:00:00

src     : 00:00:00:00:00:00

dest    : 00:00:00:00:00:00

src     : 00:00:00:00:00:00

dest    : 00:00:00:00:00:00

src     : 00:00:00:00:00:00

dest    : 00:00:00:00:00:00

memory per 2x10G ports 

MBUF_64                                   : 81894

MBUF_128                                  : 40950

MBUF_256                                  : 5000

MBUF_512                                  : 64380

MBUF_1024                                 : 64380

MBUF_2048                                 : 64000

MBUF_4096                                 : 100

MBUF_9K                                   : 100

TRAFFIC_MBUF_64                           : 65520

TRAFFIC_MBUF_128                          : 32760

TRAFFIC_MBUF_256                          : 8190

TRAFFIC_MBUF_512                          : 8190

TRAFFIC_MBUF_1024                         : 8190

TRAFFIC_MBUF_2048                         : 32760

TRAFFIC_MBUF_4096                         : 128

TRAFFIC_MBUF_9K                           : 512

MBUF_DP_FLOWS                             : 524288

MBUF_GLOBAL_FLOWS                         : 5120

master   thread  : 0 

rx  thread  : 15 

dual_if : 0

    socket  : 0 

   [   1   2   3   4   5   6   7    

dual_if : 1

    socket  : 1 

   [   8   9   10   11   12   13   14    

CTimerWheelYamlInfo does not exist 

flags           : 18070f00

write_file      : 0

verbose         : 7

realtime        : 1

flip            : 0

cores           : 7

single core     : 0

flow-flip       : 0

no clean close  : 0

zmq_publish     : 1

vlan mode       : 1

client_cfg      : 0

mbuf_cache_disable  : 0

cfg file       

mac file       

out file       

client cfg file : 

duration        : 0

factor          : 1

mbuf_factor     : 0

latency         : 0 pkt/sec

zmq_port        : 4500

telnet_port     : 4501

expected_ports  : 4

tw_bucket_usec  : 20.000000 usec

tw_buckets      : 1024 usec

tw_levels       : 3 usec

port : 0 dst:00:00:00:00:00:00  src:00:00:00:00:00:00 vlan:584

port : 1 dst:00:00:00:00:00:00  src:00:00:00:00:00:00 vlan:593

port : 2 dst:00:00:00:00:00:00  src:00:00:00:00:00:00 vlan:610

port : 3 dst:00:00:00:00:00:00  src:00:00:00:00:00:00 vlan:605

port : 4 dst:00:00:00:01:00:00  src:00:00:00:00:00:00 vlan:0

port : 5 dst:00:00:00:01:00:00  src:00:00:00:00:00:00 vlan:0

port : 6 dst:00:00:00:01:00:00  src:00:00:00:00:00:00 vlan:0

port : 7 dst:00:00:00:01:00:00  src:00:00:00:00:00:00 vlan:0

port : 8 dst:00:00:00:01:00:00  src:00:00:00:00:00:00 vlan:0

port : 9 dst:00:00:00:01:00:00  src:00:00:00:00:00:00 vlan:0

port : 10 dst:00:00:00:01:00:00  src:00:00:00:00:00:00 vlan:0

port : 11 dst:00:00:00:01:00:00  src:00:00:00:00:00:00 vlan:0

port : 12 dst:00:00:00:01:00:00  src:00:00:00:00:00:00 vlan:0

port : 13 dst:00:00:00:01:00:00  src:00:00:00:00:00:00 vlan:0

port : 14 dst:00:00:00:01:00:00  src:00:00:00:00:00:00 vlan:0

port : 15 dst:00:00:00:01:00:00  src:00:00:00:00:00:00 vlan:0

port : 16 dst:00:00:00:01:00:00  src:00:00:00:00:00:00 vlan:0

port : 17 dst:00:00:00:01:00:00  src:00:00:00:00:00:00 vlan:0

port : 18 dst:00:00:00:01:00:00  src:00:00:00:00:00:00 vlan:0

port : 19 dst:00:00:00:01:00:00  src:00:00:00:00:00:00 vlan:0

port : 20 dst:00:00:00:01:00:00  src:00:00:00:00:00:00 vlan:0

port : 21 dst:00:00:00:01:00:00  src:00:00:00:00:00:00 vlan:0

port : 22 dst:00:00:00:01:00:00  src:00:00:00:00:00:00 vlan:0

port : 23 dst:00:00:00:01:00:00  src:00:00:00:00:00:00 vlan:0

port : 24 dst:00:00:00:01:00:00  src:00:00:00:00:00:00 vlan:0

port : 25 dst:00:00:00:01:00:00  src:00:00:00:00:00:00 vlan:0

port : 26 dst:00:00:00:01:00:00  src:00:00:00:00:00:00 vlan:0

port : 27 dst:00:00:00:01:00:00  src:00:00:00:00:00:00 vlan:0

port : 28 dst:00:00:00:01:00:00  src:00:00:00:00:00:00 vlan:0

port : 29 dst:00:00:00:01:00:00  src:00:00:00:00:00:00 vlan:0

port : 30 / port : 31 : [output garbled: the remaining port lines were interleaved with a per-core MBUF_128 / MBUF_4096 / Total memory table and core_list : 0,10,0,1,1,0,0,0,0,...]

DPDK args

xx  -l  0,15,1,2,3,[argument list truncated]

EAL: Detected CPU lcores: 16

EAL: Detected NUMA nodes: 1

EAL: Detected static linkage of DPDK

EAL: Multi-process socket /var/run/dpdk/trex6/mp_socket

EAL: Selected IOVA mode 'PA'

EAL: VFIO support initialized

EAL: Probe PCI driver: net_i40e_vf (8086:154c) device: 0000:00:07.0 (socket 0)

EAL: Probe PCI driver: net_i40e_vf (8086:154c) device: 0000:00:08.0 (socket 0)

EAL: Probe PCI driver: net_i40e_vf (8086:154c) device: 0000:00:09.0 (socket 0)

EAL: Probe PCI driver: net_i40e_vf (8086:154c) device: 0000:00:0a.0 (socket 0)

EAL: Probe PCI driver: net_i40e_vf (8086:154c) device: 0000:00:0b.0 (socket 0)

EAL: Probe PCI driver: net_i40e_vf (8086:154c) device: 0000:00:0c.0 (socket 0)

EAL: Probe PCI driver: net_i40e_vf (8086:154c) device: 0000:00:0d.0 (socket 0)

EAL: Probe PCI driver: net_i40e_vf (8086:154c) device: 0000:00:0e.0 (socket 0)

TELEMETRY: No legacy callbacks, legacy socket not created

size of interfaces_vdevs 4

===>>>found net_bonding0 8

===>>>found net_bonding1 9

===>>>found net_bonding0 8

===>>>found net_bonding1 9

__eth_bond_slave_add_lock_free(476) - Slave device is already a slave of a bonded device

bond_ethdev_configure(3762) - Failed to add port 0 as slave to bonded device net_bonding0

__eth_bond_slave_add_lock_free(476) - Slave device is already a slave of a bonded device

bond_ethdev_configure(3762) - Failed to add port 1 as slave to bonded device net_bonding0

__eth_bond_slave_add_lock_free(476) - Slave device is already a slave of a bonded device

bond_ethdev_configure(3762) - Failed to add port 2 as slave to bonded device net_bonding1

__eth_bond_slave_add_lock_free(476) - Slave device is already a slave of a bonded device

bond_ethdev_configure(3762) - Failed to add port 3 as slave to bonded device net_bonding1

set driver name net_bonding

driver capability  : TCP_UDP_OFFLOAD  TSO  SLRO

set dpdk queues mode to MULTI_QUE

DPDK devices 10 : 10

-----

0 : vdev 00:07.0

1 : vdev 00:08.0

2 : vdev 00:09.0

3 : vdev 00:0a.0

4 : vdev 00:0b.0

5 : vdev 00:0c.0

6 : vdev 00:0d.0

7 : vdev 00:0e.0

8 : vdev net_bonding0

9 : vdev net_bonding1

-----

Number of ports found: 4

 

 

if_index : 0

driver name : net_bonding

min_rx_bufsize : 0

max_rx_pktlen  : 9728

max_rx_queues  : 16

max_tx_queues  : 16

max_mac_addrs  : 16

rx_offload_capa : 0x226f

tx_offload_capa : 0x9fbf

rss reta_size   : 64

flow_type_rss   : 0x7ef8

tx_desc_max     : 4096

tx_desc_min     : 0

rx_desc_max     : 4096

rx_desc_min     : 0

set slave driver = net_i40e_vf

zmq publisher at: tcp://*:4500

rx_data_q_num : 0

rx_drop_q_num : 0

rx_dp_q_num   : 7

rx_que_total : 7

--   

rx_desc_num_data_q   : 512

rx_desc_num_drop_q   : 4096

rx_desc_num_dp_q     : 512

total_desc           : 3584

--   

tx_desc_num     : 1024

ERROR there is not enough huge-pages memory in your system

EAL: Error - exiting with code: 1

  Cause: Cannot init mbuf pool small-pkt-const

bbarot@trex06.iad0:~/v3.02$ cat /etc/trex_cfg_593vlan.yaml

### Config file generated by dpdk_setup_ports.py ###

 

- port_limit: 12

  version: 2

  interfaces: ['--vdev=net_bonding0,mode=2,slave=00:07.0,slave=00:08.0,mac=52:54:00:ee:ee:04', '--vdev=net_bonding1,mode=2,slave=00:09.0,slave=00:0a.0,mac=52:54:00:ee:ee:03' ,'--vdev=net_bonding0,mode=2,slave=00:0b.0,slave=00:0c.0,mac=52:54:00:ef:ee:04', '--vdev=net_bonding1,mode=2,slave=00:0d.0,slave=00:0e.0,mac=52:54:00:ef:ee:03']

  port_info:

      - ip: 10.111.127.158

        default_gw: 10.111.127.181

        vlan: 584

      - ip: 10.136.192.6

        default_gw: 10.136.192.1

        vlan: 593

      - ip: 10.111.72.49

        default_gw: 10.111.72.39

        vlan: 610

      - ip: 10.136.86.250

        default_gw: 10.136.86.1

        vlan: 605

 

  platform:

      master_thread_id: 0

      latency_thread_id: 15

      dual_if:

        - socket: 0

          threads: [1,2,3,4,5,6,7]

        - socket: 1

          threads: [8,9,10,11,12,13,14]

 

  memory:

        mbuf_64: 81894

        mbuf_128: 40950

        mbuf_256: 5000

        mbuf_512: 64380

        mbuf_1024: 64380

        mbuf_2048: 64000

        mbuf_4096: 100

        mbuf_9k: 100

3 Replies

From what I have read, TRex requires a significant amount of hugepage memory to run, especially depending on the --mbuf-factor option, which scales the memory allocated for packet buffers. You could increase the hugepage allocation (adjusting the values to your system's requirements), or reduce the --mbuf-factor value to something lower, such as 0.1 or 0.05, to cut the memory required for packet buffers.
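As a rough back-of-the-envelope sketch (integer math, payload bytes only; the real footprint is larger because DPDK adds per-mbuf headroom and metadata, and TRex allocates many pools), here is how scaling the mbuf_2048 pool from the YAML below by two candidate factors works out:

```shell
# Approximate payload memory for the mbuf_2048 pool at two --mbuf-factor values.
count=64000   # mbuf_2048 count from the config in the question
size=2048     # payload bytes per buffer (ignores DPDK headroom/metadata)
for factor_pct in 20 10; do            # 0.2 and 0.1, in percent for integer math
  bytes=$(( count * factor_pct / 100 * size ))
  echo "factor 0.$(( factor_pct / 10 )): $(( bytes / 1024 / 1024 )) MB"
done
# prints:
# factor 0.2: 25 MB
# factor 0.1: 12 MB
```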

I'm not sure what else to suggest, but if this does not work, create an issue on the repo here --> https://github.com/cisco-system-traffic-generator

Hope this helps.

Please mark this as helpful or solution accepted to help others
Connect with me https://bigevilbeard.github.io

bbarot
Level 1

Thanks @bigevilbeard for the quick response. I have allocated 8 GB of memory, and checking with free -h shows a good amount of free memory available.

 

Happy to throw some thoughts in here! The issue is not the amount of free memory available, but rather the hugepage allocation. DPDK uses huge pages to improve performance, and TRex requires a certain number of them to be reserved up front. I _think_ you can check this with

cat /proc/meminfo | grep HugePages
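If those counters come back, the free hugepage budget is HugePages_Free times Hugepagesize; a small sketch to compute it from the standard /proc/meminfo fields (Linux only):

```shell
# Free hugepage memory in MB = HugePages_Free * Hugepagesize(kB) / 1024.
awk '/^HugePages_Free/ {free = $2}
     /^Hugepagesize/   {kb = $2}
     END {printf "free hugepage memory: %d MB\n", free * kb / 1024}' /proc/meminfo
```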

Maybe try (update for your needs here btw)

echo 4096 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

(Note the path: the per-size directory is hugepages-2048kB for the default 2 MB pages, and the page size itself is fixed by the kernel; only nr_hugepages can be set at runtime. 4096 x 2 MB = 8 GB.)
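For sizing, the page count is just the target pool divided by the page size; a quick sanity check assuming the common 2 MB hugepage size (check Hugepagesize in /proc/meminfo for your system):

```shell
# Number of 2 MB hugepages needed for an 8 GB hugepage pool.
target_gb=8
page_mb=2
echo "$(( target_gb * 1024 / page_mb )) pages"   # prints "4096 pages"
```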

Hope this helps.

Please mark this as helpful or solution accepted to help others
Connect with me https://bigevilbeard.github.io