Luís Ribeiro,

Siemens Portugal (Alfragide)


Telecommunication providers often face a complex problem: how to maximize their Return On Investment (ROI) while keeping customers happy by consistently providing good QoS levels. To keep costs low, operators often accept more customers than their network resources could theoretically accommodate. Normally this is not a problem, because most customers have a very sparse usage pattern. Even so, in some circumstances congestion will occur. There are already some well-known congestion-avoidance techniques that minimize (up to a point) congestion in the network. The most widely used protocol is TCP. In the network core, active queue management protocols (the Random Early Detection – RED – family [2, 3]) or the simple Drop from Tail (DT) operate on the queues of the Network Elements (NE). These protocols perform well, but they still share a small problem: either there must be at least some packet loss to trigger a protocol action (TCP), or the protocol action itself is to cause packet loss (RED or DT), which means QoS degradation. Several approaches have been taken to overcome this problem. One of them is to predict congestion instead of reacting to it. This study provides a snapshot of the current research on congestion prediction in data networks using Data Mining techniques.
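To make the proactive-dropping behaviour of RED mentioned above concrete, the sketch below shows the core of the classic RED decision rule: an exponentially weighted moving average of the queue length, and a drop probability that ramps up linearly between two thresholds. The parameter values (thresholds, weight, maximum probability) are illustrative choices, not figures from the talk.

```python
import random

# Illustrative RED parameters (example values, not from the talk).
MIN_TH = 5      # below this average queue length, never drop early
MAX_TH = 15     # at or above this average queue length, always drop
MAX_P = 0.1     # early-drop probability reached just below MAX_TH
WEIGHT = 0.2    # EWMA weight for the average queue-length estimate

def update_average(avg, queue_len, weight=WEIGHT):
    """Exponentially weighted moving average of the instantaneous queue length."""
    return (1 - weight) * avg + weight * queue_len

def drop_probability(avg):
    """RED drop probability as a function of the average queue length."""
    if avg < MIN_TH:
        return 0.0                      # light load: accept every packet
    if avg >= MAX_TH:
        return 1.0                      # persistent congestion: drop everything
    # Linear ramp between the two thresholds.
    return MAX_P * (avg - MIN_TH) / (MAX_TH - MIN_TH)

def should_drop(avg, rng=random.random):
    """Randomized per-packet drop decision based on the current average."""
    return rng() < drop_probability(avg)
```

Note how, unlike Drop from Tail, packets start being dropped probabilistically while the queue still has room, signalling TCP senders to slow down before the buffer overflows; this is exactly the "action is to cause packet loss" behaviour the abstract points out.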


Date: 2007-Jul-31     Time: 14:00:00     Room: 336

For more information: