(1)

Performance Measures and Application Performance

Kyunghan Lee

Networked Computing Lab (NXC Lab)

Department of Electrical and Computer Engineering Seoul National University

https://nxc.snu.ac.kr

kyunghanlee@snu.ac.kr

(2)

What is network performance?

□ Two fundamental measures:

§ 1. Bandwidth

• Roughly: bits transferred in unit time

• Not quite the Electrical Engineering definition

• Also known as Throughput

§ 2. Latency

• Time for 1 message to traverse the network

• Half the Round Trip Time (RTT)

• Also known as Delay

Kevin Fall and Steve McCanne, “You Don't Know Jack about Network Performance,” ACM Queue, pp. 54-59, May 2005.

(3)


TCP congestion control:

Additive Increase Multiplicative Decrease (AIMD)

§ Approach: sender increases transmission rate (window size), probing for usable bandwidth, until loss occurs

additive increase: increase cwnd by 1 MSS (max segment size) every RTT until loss detected

multiplicative decrease: cut cwnd in half after loss

cwnd: TCP sender congestion window size

[Figure: AIMD's sawtooth behavior, probing for bandwidth: additively increase the window size until loss occurs, then cut the window in half; repeated over time]
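To make the sawtooth concrete, here is a minimal Python sketch of AIMD (not from the slides): the window grows by one MSS per RTT and is halved when a loss is detected. The fixed path capacity used as the loss trigger is a made-up stand-in for real loss detection (duplicate ACKs / timeouts).

```python
# Minimal AIMD sketch: cwnd grows by 1 MSS per RTT, halves on loss.
# The fixed "capacity" loss trigger is an illustration, not part of real TCP.

MSS = 1            # count cwnd in MSS units for simplicity
capacity = 20      # hypothetical path capacity, in MSS per RTT

def aimd(rounds=30):
    cwnd = 1.0
    trace = []
    for _ in range(rounds):
        trace.append(cwnd)
        if cwnd > capacity:          # pretend a loss is detected this RTT
            cwnd = max(cwnd / 2, 1)  # multiplicative decrease
        else:
            cwnd += MSS              # additive increase: +1 MSS per RTT
    return trace

if __name__ == "__main__":
    print(aimd())   # prints the familiar sawtooth: 1, 2, ..., 21, 10.5, 11.5, ...
```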

(4)

TCP Congestion Control

§ sender limits transmission:

§ cwnd is dynamic, function of perceived network congestion

TCP sending rate:

§ roughly: send cwnd bytes, wait RTT for ACKs, then send more bytes

[Figure: sender sequence number space: last byte ACKed, bytes sent but not yet ACKed ("in-flight"), last byte sent; the in-flight amount is limited by cwnd]

LastByteSent − LastByteAcked ≤ cwnd

Rate ≈ cwnd / RTT  bytes/sec

(5)


□ cwnd and ACK

§ “ACK clocking”: the sender transmits a data packet upon ACK reception

□ Control the size of the sliding window

§ Suppose that the receiver buffer size is sufficiently large (i.e., large rwnd)

§ Controlling the window size will determine the TX rate (→ congestion control).

Source: http://www.soi.wide.ad.jp/class/20020032/

Control Loop in TCP

(6)


Time to transfer an object

(Ignoring queuing delay, processing, etc.)

(Slides adapted from Prof. Roscoe)

[Figure: time to transfer objects of different sizes, e.g. a digital photograph, an email message, a typed character]

(7)

What’s the throughput? What’s the transfer time?

Throughput = (Transfer size) / (Transfer time)

Transfer time = RTT + (Transfer size) / Bandwidth
(RTT ≈ request + first-byte delay)

Example: request a 1 MB file over a 1 Gb/s link, with 200 ms RTT

0.2 s + 8 Mb / 1 Gb/s = 0.208 s → 8 Mb / 0.208 s ≈ 38.5 Mb/s ??
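A quick check of the arithmetic above, as a sketch (1 MB is treated as 8 Mb, matching the slide's rounding):

```python
# Effective throughput of a 1 MB request over a 1 Gb/s link with 200 ms RTT.
rtt = 0.200                 # seconds (request + first-byte delay)
size_bits = 8e6             # 1 MB ~ 8 Mb, as on the slide
bandwidth = 1e9             # 1 Gb/s

transfer_time = rtt + size_bits / bandwidth          # 0.2 + 0.008 = 0.208 s
throughput = size_bits / transfer_time               # ~38.5 Mb/s

print(f"transfer time = {transfer_time:.3f} s")      # 0.208 s
print(f"throughput    = {throughput/1e6:.1f} Mb/s")  # ~38.5 Mb/s, far below 1 Gb/s
```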

(8)

What’s gone wrong here?

□ File is too small?

□ Round-trip time is too high?

□ You can’t reduce the latency (propagation delay).

§ Adding bandwidth won’t really help.

(9)


Bandwidth-Delay Product (BDP)

□ Example: Latency = 200ms, Bandwidth = 40Gb/s

§ ⇒ “channel memory” = 8Gb, or 1 gigabyte

□ BDP: What the sender can send before receiver sees anything

§ Or must send to keep the network pipe full...

(Slides adapted from Prof. Roscoe)

[Figure: the network as a pipe of length Delay and width Bandwidth; the BDP is the data needed to fill it]
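The same "channel memory" figure, computed directly (a sketch with the slide's numbers):

```python
# Bandwidth-Delay Product: how much the sender can put "into the pipe"
# before the receiver sees anything (or must keep in flight to fill it).
bandwidth_bps = 40e9      # 40 Gb/s
latency_s = 0.200         # 200 ms

bdp_bits = bandwidth_bps * latency_s
print(f"BDP = {bdp_bits/1e9:.0f} Gb = {bdp_bits/8/1e9:.0f} GB")  # 8 Gb = 1 GB
```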

(10)

TCP Window Size

□ Recall TCP keeps data around for transmission!

§ e.g., consider sliding window of 𝑤 bytes

§ TCP moves 𝑤 bytes every RTT (needs to wait for ACK)

⇒ throughput = 𝑤 / RTT

□ What’s the optimal window size 𝑤?

§ Need to keep the pipe completely full

⇒ 𝑤 = RTT × pipe bandwidth

§ Example: 10Gb/s, 200ms RTT

• Gives: 𝑤 ≈ 250 MB per connection...

□ Protocol limits:

§ TCP window size without scaling ≤ 64kB

§ TCP window size with RFC1323 scaling ≤ 1GB
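A small sketch of why these limits matter: with a window of w bytes the sender moves at most w bytes per RTT, so the rate is roughly min(w, BDP)/RTT. With the slide's 10 Gb/s, 200 ms example, an unscaled 64 kB window caps the connection at a few Mb/s.

```python
# Window-limited TCP throughput: roughly min(window, BDP) / RTT.
def tcp_rate_bps(window_bytes, rtt_s, bandwidth_bps):
    bdp_bytes = bandwidth_bps * rtt_s / 8
    return 8 * min(window_bytes, bdp_bytes) / rtt_s

rtt = 0.200                       # 200 ms
link = 10e9                       # 10 Gb/s

for w in (64 * 1024,              # no window scaling
          1 << 30,                # RFC 1323 scaling, up to ~1 GB
          int(link * rtt / 8)):   # the "optimal" w = RTT x bandwidth (250 MB)
    print(f"w = {w/1e6:8.2f} MB -> {tcp_rate_bps(w, rtt, link)/1e6:10.1f} Mb/s")
```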

(11)


What’s gone wrong here?

□ File is too small?

□ Round-trip time is too high?

□ You can’t reduce the latency.

§ Adding bandwidth won’t really help.

Lesson 1: effectively using high bandwidth-delay product networks is hard.

Lesson 2: as bandwidth increases, latency is more important for performance.

(12)

What are these high-BDP networks?

□ Where is high bandwidth?

§ Datacenters! 100 Gbps or more (Ethernet or InfiniBand)

§ 5G! Targeting 10Gbps as its peak data rate

□ Where is high delay?

§ Between datacenters: 200ms not unusual

§ 3G or 4G networks: higher than 50ms, 500ms not unusual

§ Wireless networks: many reasons, as we will see

(13)


Protocol Design impacts Performance

□ Protocols which require many RTTs don’t work well in the wide area.

□ Example: Opening a network folder in Windows 2000

§ About 80 request/response pairs on average

§ 200ms RTT (e.g. London-Redmond)

⇒ more than 16 seconds delay

□ Upgrading your network will not help!

§ (and didn’t...)
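A one-line check of the arithmetic (a sketch; the 80 round trips and 200 ms RTT are the slide's numbers):

```python
# A chatty protocol pays one RTT per request/response pair.
round_trips = 80        # request/response pairs to open the folder
rtt = 0.200             # 200 ms, e.g. London-Redmond

print(f"total ~= {round_trips * rtt:.0f} s")   # ~16 s, independent of bandwidth
```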

(14)

Application Requirements

□ Some applications can’t use all the network

□ Example: constant-bit-rate 320 kb/s MP3 streaming

□ Utility function: a measure of how useful each network resource (bandwidth, latency, etc.) is to an application

(Slides adapted from Prof. Roscoe)

[Figure: application utility as a function of bandwidth for four cases: an infinitely large file transfer, most real file transfers, a constant bit rate stream (e.g. audio, video), and network-coded variable-bit-rate multimedia]
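The utility shapes in the figure can be sketched as simple functions of bandwidth; the curves below are hypothetical illustrations, not taken from the slides.

```python
# Illustrative utility-vs-bandwidth shapes (hypothetical curves).
import math

def utility_infinite_transfer(bw):      # more bandwidth always helps, linearly
    return bw

def utility_real_file_transfer(bw):     # diminishing returns once "fast enough"
    return 1 - math.exp(-bw / 10.0)

def utility_cbr_stream(bw, rate=0.32):  # e.g. a 320 kb/s stream: all or nothing
    return 1.0 if bw >= rate else 0.0

for bw in (0.1, 0.32, 1, 10, 100):      # Mb/s
    print(bw,
          utility_infinite_transfer(bw),
          round(utility_real_file_transfer(bw), 2),
          utility_cbr_stream(bw))
```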

(15)


Application Requirements: Burstiness

□ Many applications are “bursty”

§ Required network bandwidth varies over time

□ Challenge: describe and analyze bursty sources

§ Token buckets, leaky buckets, queuing theory...
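One standard way to describe such a bursty source is a token bucket: the source may burst up to b bytes at once, but its long-run rate is capped at r bytes/sec. Below is a minimal sketch (class and parameter names are mine, not from the slides).

```python
import time

class TokenBucket:
    """Minimal token bucket: fill rate r tokens/sec, burst size b tokens."""
    def __init__(self, r, b):
        self.r = r                 # token fill rate (e.g. bytes per second)
        self.b = b                 # bucket depth = max burst (bytes)
        self.tokens = b
        self.last = time.monotonic()

    def allow(self, n):
        """Return True if n bytes may be sent now, consuming n tokens."""
        now = time.monotonic()
        self.tokens = min(self.b, self.tokens + (now - self.last) * self.r)
        self.last = now
        if self.tokens >= n:
            self.tokens -= n
            return True
        return False

# Usage: a source limited to 1 Mbit/s average with 100 kbit bursts.
tb = TokenBucket(r=125_000, b=12_500)   # bytes/s and bytes
print(tb.allow(12_500))   # True: a full burst is allowed immediately
print(tb.allow(1_500))    # False: bucket is empty until tokens refill
```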

(16)


Application Requirements: Jitter

□ Jitter: variance in end-to-end latency (or RTT)

□ Example: voice (telephony)

□ How long to wait to play a received packet?

§ Too long: hard to have a conversation (150 ms limit)

§ Too short: some packets arrive too late (lose audio)

[Figure: packets 1-5 sent equally spaced arrive from the network unequally spaced]
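A small sketch of the playout-delay trade-off described above: each packet is played after a fixed playout delay, and packets whose network delay exceeds it arrive too late for the application. The packet delays below are made up for illustration.

```python
# Fixed playout delay vs. jitter: late packets are dropped, early ones wait.
packets = [   # (send_time_ms, network_delay_ms), made-up jittery delays
    (0, 40), (20, 55), (40, 130), (60, 45), (80, 70), (100, 160),
]

def playout_loss(packets, playout_delay_ms):
    late = sum(1 for _, delay in packets if delay > playout_delay_ms)
    return late / len(packets)

for pd in (50, 100, 150, 200):
    print(f"playout delay {pd:3d} ms -> "
          f"{playout_loss(packets, pd):.0%} of packets arrive too late")
```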

(17)

Application Requirements: Loss

□ Networks do, sometimes, lose packets.

□ Loss is complex:

§ Packet loss rate, bit-error rate

□ Losses and errors must be detected:

§ Codes, sequence numbers, checksums

□ Handled by:

§ Error correction

§ Retransmission
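A minimal sketch of the two detection ideas above: sequence-number gaps reveal lost packets, and a checksum reveals bit errors. The additive checksum here is a toy; real stacks use the ones'-complement Internet checksum or CRCs.

```python
# Detect loss via sequence-number gaps and corruption via a toy checksum.

def missing_seqs(received_seqs, expected_last):
    """Sequence numbers never received (i.e. presumed lost)."""
    return sorted(set(range(expected_last + 1)) - set(received_seqs))

def checksum(data: bytes) -> int:
    """Toy 16-bit additive checksum (real stacks use stronger checks)."""
    return sum(data) & 0xFFFF

received = [0, 1, 2, 4, 5, 7]
print("lost:", missing_seqs(received, 7))   # [3, 6] -> ask for retransmission

pkt = b"hello"
ok = checksum(pkt)
corrupted = b"hellp"
print(checksum(corrupted) == ok)            # False: bit error detected
```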

(18)

Application Requirements: Perspectives

Distance from server to user | Network latency | Typical packet loss | Throughput (quality)   | 4 GB DVD download time
Local: <160 km               | 1.6 ms          | 0.6%                | 44 Mb/s (HDTV)         | 12 min.
Regional: 800-1600 km        | 16 ms           | 0.7%                | 4 Mb/s (not quite DVD) | 2.2 hrs.
Cross-continent: ~4800 km    | 48 ms           | 1.0%                | 1 Mb/s (not quite TV)  | 8.2 hrs.
Multi-continent: ~10000 km   | 96 ms           | 1.4%                | 0.4 Mb/s (poor)        | 20 hrs.

Source: Tom Leighton, “Improving Performance on the Internet,” Communications of the ACM, February 2009.
(Slides adapted from Prof. Roscoe)

(19)

How to Read a Paper

S. Keshav

You May Think You Already Know How To Read, But…

(20)

http://www.sigcomm.org/sites/default/files/ccr/papers/2007/July/1273445-1273458.pdf

(21)


You Spend a Lot of Time Reading

□ Reading for grad classes

□ Reviewing conference submissions

□ Giving colleagues feedback

□ Keeping up with your field

□ Staying broadly educated

□ Transitioning into a new area

□ Learning how to write better papers :)

It is worthwhile to learn to read effectively.

(22)

Keshav’s Three-Pass Approach: Pass 1

□ A ten-minute scan to get the general idea

§ Title, abstract, and introduction

§ Section and subsection titles

§ Conclusion

§ Bibliography

□ What to learn: the five C’s

§ Category: What type of paper is it?

§ Context: What body of work does it relate to?

§ Correctness: Do the assumptions seem valid?

§ Contributions: What are the main research contributions?

§ Clarity: Is the paper well-written?

□ Decide whether to read further…

(23)


Keshav’s Three-Pass Approach: Pass 2

□ A more careful, one-hour reading

§ Read with greater care, but ignore details like proofs

§ Figures, diagrams, and illustrations

§ Mark relevant references for later reading

□ Grasp the content of the paper

§ Be able to summarize the main idea

§ Identify whether you can (or should) fully understand

□ Decide whether to

§ Abandon reading in greater depth

§ Read background material before proceeding further

§ Persevere and continue for a third pass

(24)

Keshav’s Three-Pass Approach: Pass 3

□ Several-hour virtual re-implementation of the work

§ Making the same assumptions, recreate the work

§ Identify the paper’s innovations and its failings

§ Identify and challenge every assumption

§ Think how you would present the ideas yourself

§ Write down ideas for future work

□ When should you read this carefully?

§ Reviewing for a conference or journal

§ Giving colleagues feedback on a paper

§ Understanding a paper closely related to your research

§ Deeply understanding a classic paper in the field

(25)


Other Tips for Reading Papers

□ Read at the right level for what you need

§ “Work smarter, not harder”

□ Read at the right time of day and in the right place

§ When you are fresh, not sleepy

§ Where you are not distracted, and have enough time

□ Read actively

§ With a purpose (what is your goal?)

§ With a pen or computer to take notes

□ Read critically

§ Think, question, challenge, critique, …

(26)

Performance Measures and Application Performance

Kyunghan Lee

Networked Computing Lab (NXC Lab)

Department of Electrical and Computer Engineering Seoul National University

https://nxc.snu.ac.kr

kyunghanlee@snu.ac.kr

(27)


Performance: Bandwidth? Latency?

[Pearson Scott Foresman]

To get many megabits per second …

4400 km to fly → 80 km/h with a 1 TB USB stick = 40 Mbps

... but 55 hours
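A quick check of the slide's arithmetic, treating 1 TB as 10^12 bytes (a sketch):

```python
# "Bandwidth" of driving a 1 TB USB stick 4400 km at 80 km/h.
size_bits = 1e12 * 8          # 1 TB
hours = 4400 / 80             # 55 hours of "latency"
throughput = size_bits / (hours * 3600)
print(f"{hours:.0f} h, {throughput/1e6:.0f} Mb/s")   # ~55 h, ~40 Mb/s
```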

(28)


Bandwidth and Latency

[Figure: transfer time vs. object size (log scale); for small objects the transfer time approaches the network round-trip time, for large objects it grows with object size / bandwidth]

Why Flow-Completion Time is the Right Metric for Congestion Control

Nandita Dukkipati
Computer Systems Laboratory, Stanford University, Stanford, CA 94305-9025
nanditad@stanford.edu

Nick McKeown
Computer Systems Laboratory, Stanford University, Stanford, CA 94305-9025
nickm@stanford.edu

ABSTRACT

Users typically want their flows to complete as quickly as possible. This makes Flow Completion Time (FCT) an important - arguably the most important - performance metric for the user. Yet research on congestion control focuses almost entirely on maximizing link throughput, utilization and fairness, which matter more to the operator than the user. In this paper we show that with typical Internet flow sizes, existing (TCP Reno) and newly proposed (XCP) congestion control algorithms make flows last much longer than necessary - often by one or two orders of magnitude. In contrast, we show how a new and practical algorithm - RCP (Rate Control Protocol) - enables flows to complete close to the minimum possible.

1. WHY WE SHOULD MAKE FLOWS COMPLETE QUICKLY

When users download a web page, transfer a file, send/read email, or involve the network in almost any interaction, they want their transaction to complete in the shortest time; and therefore, they want the shortest possible flow completion time (FCT).¹ They care less about the throughput of the network, how efficiently the network is utilized, or the latency of individual packets; they just want their flow to complete as fast as possible. Today, most transactions are of this type and it seems likely that a significant amount of traffic will be of this type in the future [1]². So it is perhaps surprising that almost all work on congestion control focuses on metrics such as throughput, bottleneck utilization and fairness. While these metrics are interesting – particularly for the network operator – they are not very interesting to the user; in fact, high throughput or efficient network utilization is not necessarily in the user’s best interest. Certainly, as we will show, these metrics are not sufficient to ensure a quick FCT.

Intuition suggests that as network bandwidth increases flows should finish proportionally faster. For the current Internet, with TCP, this intuition is wrong. Figure 1 shows how improvements in link bandwidth have not reduced FCT by much in the Internet over the past 25 years. With a 100-fold increase in bandwidth, FCT has reduced by only 50% for typical downloads. While propagation delay will always place a lower bound, FCT is dominated by TCP’s congestion control mechanisms which make flows last multiple RTTs even if a flow is capable of completing within one round-trip time (RTT).

So can we design congestion control algorithms that make flows finish quickly? Unfortunately, it is not usually possible to provably minimize the FCT for flows in a general network, even if their arrival times and durations are known [3]. Worse still, in a real network flows come and go unpredictably and different flows take different paths! It is intractable to minimize FCT. So instead congestion control algorithms are focussed on efficiently using bottleneck link (and only for long-lived flows) because this is easier to achieve. But we believe - and it is the main argument of this paper - that instead of being deterred by the complexity of the problem, we should find algorithms that come close to minimizing FCTs, even if they are heuristic.

A well-known and simple method that comes close to minimizing FCT is for each router to use processor-sharing (PS) – a router divides outgoing link bandwidth equally among

¹ FCT = time from when the first packet of a flow is sent (in TCP, this is the SYN packet) until the last packet is received.
² Real-time streaming of audio and video are the main exceptions, but they represent a tiny fraction of traffic.

[Figure 1 plot: relative FCT improvement vs. relative bandwidth improvement (both 1-100, log scale), with data points from 45 Mbps ['80], 90 Mbps ['81], 417 Mbps ['86], 1.7 Gbps ['88], 2.5 Gbps ['91] to 10 Gbps ['97], flow sizes of 1 MB, 10 MB and 100 MB, and a reference line of equal improvement in FCT and bandwidth]

Figure 1: Improvement in flow-completion time as a function of link bandwidth for the Internet, normalized to 45 Mbps introduced in 1980. Flows have an RTT of 40 ms and complete in TCP slow-start. Plot inspired by Patterson’s illustration of how latency lags bandwidth in computer systems [2].

ACM SIGCOMM Computer Communication Review, Volume 36, Number 1, January 2006


ACM CCR 2006
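A rough sketch of the intuition behind Figure 1: a small flow that finishes in slow-start spends most of its FCT in RTT-paced doubling rounds, so extra link bandwidth barely helps. The doubling model and 1460-byte MSS below are simplifying assumptions, not the paper's exact model.

```python
import math

def fct_slow_start(size_bytes, rtt_s, bw_bps, mss=1460):
    """Rough FCT for a flow that completes in TCP slow-start:
    one RTT per doubling round, plus serialization time on the link."""
    pkts = math.ceil(size_bytes / mss)
    rounds = math.ceil(math.log2(pkts + 1))     # 1, 2, 4, ... packets per round
    return rounds * rtt_s + size_bytes * 8 / bw_bps

rtt = 0.040                                     # 40 ms, as in Figure 1
for bw in (45e6, 1.7e9, 10e9):                  # link rates from the figure
    print(f"{bw/1e6:7.0f} Mb/s -> FCT ~ {fct_slow_start(1e6, rtt, bw):.3f} s")
# A >200x bandwidth increase shrinks the 1 MB FCT only from ~0.58 s to ~0.40 s.
```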

(29)


Beyond Flow Completion Time?

□ How long does https://www.google.com take to load?

□ What’s the best video quality I can watch, without the “buffering”?

□ How long does my Hadoop job take?

□ ...

Also, we want consistent, predictable performance!

What about Fairness?!

□ Suppose a network is flow fair. How useful is that?

“Both the thing being allocated (rate) and what it is allocated among (flows) are completely daft—both unrealistic and impractical.”


Flow Rate Fairness: Dismantling a Religion

Bob Briscoe
BT Research & UCL
bob.briscoe@bt.com

ABSTRACT

Resource allocation and accountability keep reappearing on every list of requirements for the Internet architecture. The reason we never resolve these issues is a broken idea of what the problem is. The applied research and standards communities are using completely unrealistic and impractical fairness criteria. The resulting mechanisms don’t even allocate the right thing and they don’t allocate it between the right entities. We explain as bluntly as we can that thinking about fairness mechanisms like TCP in terms of sharing out flow rates has no intellectual heritage from any concept of fairness in philosophy or social science, or indeed real life. Comparing flow rates should never again be used for claims of fairness in production networks. Instead, we should judge fairness mechanisms on how they share out the ‘cost’ of each user’s actions on others.

Categories and Subject Descriptors

K.4.1 [Computers and Society]: Public Policy Issues; C.2.1 [Computer-communication networks]: Network Architecture and Design

General Terms

Economics, Security

Keywords

Resource allocation, congestion control, fairness, accountability, identity

1. INTRODUCTION

“But he has nothing on at all.”
The Emperor’s New Clothes, Hans Christian Andersen

This paper is deliberately destructive. It sets out to destroy an ideology that is blocking progress—the idea that fairness between multiplexed packet traffic can be achieved by controlling relative flow rates alone. Flow rate fairness was the goal behind fair resource allocation in widely de- […] like weighted fair queuing (WFQ), TCP [8, 1, 11].

[…] solved. Obscured by this broken idea, we wouldn’t know a good solution from a bad one.

Controlling relative flow rates alone is a completely impractical way of going about the problem. To be realistic for large-scale Internet deployment, relative flow rates should be the outcome of another fairness mechanism, not the mechanism itself. That other mechanism should share out the ‘cost’ of one user’s actions on others—how much each user’s transfers restrict other transfers, given capacity constraints. Then flow rates will depend on a deeper level of fairness that has so far remained unnamed in the literature, but is best termed ‘cost fairness’.

It really is only the idea of flow rate fairness that needs destroying—nearly everything we’ve engineered can remain. The Internet architecture needs some minor additions, but otherwise it is largely already suited to cost fairness.

The metric required to arbitrate cost fairness is simply volume of congestion, that is congestion times the bit rate of each user causing it, taken over time. In engineering terms, for each user it can be measured very easily as the amount of data the user sent that was dropped. Or with explicit congestion notification (ECN [30]) the amount of each user’s data to have been congestion marked. Importantly, unlike flow rates, this metric integrates easily and correctly across different flows on different paths and across time.

What we call cost fairness has been in the literature for nearly a decade, but it hasn’t been put so bluntly before. We were moved to spell it out unambiguously (and avoiding maths), because this isn’t just some dry academic fairness debate that might change allocation percentages somewhere in the third decimal place. The outcomes due to flow rate fairness that we see on the Internet today are hopelessly unlike the outcomes that would result from cost fairness.

Not that the outcomes we see are the deliberate intent of flow rate fairness. They are the random result of an absence of fairness control, because flow rate fairness isn’t even capable of reasoning about questions like, “How many flows is it fair to start between two endpoints?” or, “What rate is fair for a flow that has been running longer than another?”. While everyone prevaricates, novel p2p applications have started to thoroughly exploit this architectural vacuum with no guilt or shame, by just running more flows for longer (after all, they are using TCP, which is fair isn’t it?). In response some ISPs are deploying kludges like volume caps or throttling specific applications using deep packet inspection. […] turned into an arms race.

ACM CCR, 2007

(30)

Let’s Talk about Reliability

□ Three particular considerations with reliability

§ The end-to-end argument

§ The fate-sharing principle

§ Packet vs. circuit switching

□ What if no reliable transport is provided?

§ Programmer burden

• Every application that needs reliability has to engineer it from scratch

§ Much higher likelihood of bugs

§ Wasteful effort

(31)

What if the Network Layer tried to provide Reliable Delivery?

□ Reliable (or unreliable) transport ...built on...

Best-effort global packet delivery

Example: reliably transfer a file from host A to B

Solution 1: check reliability at every step (involving the network layer)

Solution 2: allow unreliable steps (network layer is best-effort); B checks and tells A to retry on failure
(Can still fail, but only if A / B themselves fail.)

[Figure: hosts A and B connected across the network, with reliability checked either at every step or end-to-end]

(32)

Should we ever Implement Reliability in the Network?

Yes, some, to reduce the number of end-to-end retries needed!

With 10 links between A and B and per-link P(error) = 0.1 or 0.01:

P(retry) = 1 − 0.90^10 ≈ 0.65

P(retry) = 1 − 0.99^10 ≈ 0.10
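A quick check of the probabilities above (a sketch using the slide's numbers):

```python
# Probability that an end-to-end transfer over 10 links needs a retry.
links = 10
for p_err in (0.1, 0.01):
    p_retry = 1 - (1 - p_err) ** links
    print(f"per-link error {p_err:4.2f} -> P(end-to-end retry) = {p_retry:.2f}")
# 0.10 -> 0.65 ; 0.01 -> 0.10: better per-link reliability cuts end-to-end retries.
```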

□ Implementing reliability in the network ...

§ ... does not reduce end-host complexity

§ ... does increase network complexity

§ ... often imposes overhead for apps that don’t need it

§ ... but can enhance performance in some cases

(33)


End-to-End Argument Interpretations

1. Only if sufficient

§ Don’t implement a function at a lower layer unless it can be completely implemented there

2. Only if necessary

§ Don’t implement a function at a lower layer unless hosts cannot implement it themselves

3. Only if useful

§ If hosts can, implement it in-network only as an optimization, BUT only if it is not burdensome for apps that don’t need it

END-to-END Arguments in System Design

J.H. Saltzer, D.P. Reed, and D.D. Clark*

M.I.T. Laboratory for Computer Science

IEEE ICDCS 1981

(34)


The Fate-Sharing Principle

How do we prevent this?

“A distributed system is one in which the failure of a computer you didn't even know existed can render your own computer unusable.”
— Leslie Lamport, Microsoft Research

(Photo: lamport.org)

(35)


The Fate-Sharing Principle

□ When storing state in a distributed system, co-locate it with entities that rely on that state!

□ State is lost only if those entities fail; then it doesn’t matter.

e.g., network connection state at end hosts.

□ Survivability in Networking

§ End-points should be able to continue communicating without resetting the conversation, even under failures.

(36)

Packet vs. Circuit Switching

□ One problem with circuit switching is that it doesn’t route around trouble

[Figure: a circuit is established between A and B across several switches; a switch on the path fails, and A is forced to signal a new circuit to restore communication]

(37)


Packet vs. Circuit Switching

□ One problem with circuit switching is that it doesn’t route around trouble

[Figure: A and B connected via switches s1-s5; a switch fails and the route is recomputed on the fly by s2: packet switching routes around trouble]

Packet switching beats Circuit switching with respect to resilience and efficiency.

(38)

Packet vs. Circuit Switching

Advantages

□ efficient use of resources

□ simpler to implement than circuit switching

□ route around trouble

Disadvantages

□ unpredictable performance

□ requires buffer management and congestion control

Reservation makes sense when the Peak/Average ratio is small.

§ Voice traffic has a ratio of 3 or so.

Reservation wastes capacity when the Peak/Average ratio is big.

§ Data applications are bursty; ratios >100 are common.

§ Is this true for Internet video?

Source: www.smartinsights.com

(39)


Quiz 0 (Submit it within 20 minutes and Leave)

□ Write down the following in the header of your A4 paper.

§ [Quiz 0] “Your student ID” – “Your name”

§ Submit it with the email title: [2020S WN] Quiz # Student ID Your Name

• To nxclab@gmail.com

□ Explain the following terms one by one and describe in as much detail as possible how they are related to each other. (Define notations if needed.)

§ Doppler shift

§ Doppler spread

§ Delay spread

§ Coherence distance

§ Coherence time

§ Coherence bandwidth
