
Redis latency spikes and the 99th percentile


One interesting thing about the Stripe blog post about Redis is that it includes latency graphs obtained during their tests. In order to persist on disk, Redis needs to call the fork() system call. Forking is usually fast on physical servers and under most hypervisors, even for big processes. However, Xen is slow to fork, so with certain EC2 instance types (and other virtual server providers as well), it is possible to have serious latency spikes every time the parent process forks in order to persist on disk. The Stripe graph is pretty clear in this regard.

[Image: latency spikes observed during the Stripe tests — http://antirez.com/misc/stripe-latency.png]
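Fork time is easy to measure directly on the machine you care about. Here is a minimal sketch, not from the original post, assuming a Unix system and Python; the 1 GB allocation stands in for a Redis process holding a sizeable dataset. Redis itself also reports the duration of its last fork in the latest_fork_usec field of the INFO output.

    import os
    import time

    # Allocate ~1 GB so the process has a large resident set,
    # roughly simulating a Redis instance holding a big dataset.
    payload = bytearray(1024 * 1024 * 1024)

    start = time.monotonic()
    pid = os.fork()  # the parent stalls while the kernel copies page tables
    elapsed = time.monotonic() - start

    if pid == 0:
        os._exit(0)  # child exits immediately; we only care about fork time
    os.waitpid(pid, 0)
    print(f"fork() took {elapsed * 1000:.1f} ms")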

As you can guess, if you run a latency test during the fork, all the requests in flight at the moment the parent process forks will be delayed by up to one second (taking the graph above as an example; I'm not sure what the process size or the EC2 instance type was). This will produce a number of samples with high latency, and will affect the 99th percentile result.
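The kind of test involved is simple to reproduce. A minimal sketch, assuming a Redis server on localhost and the redis-py client (neither is specified in the post):

    import time
    import redis

    r = redis.Redis(host="localhost", port=6379)

    samples = []
    for _ in range(100_000):
        start = time.monotonic()
        r.ping()  # one full round trip to the server
        samples.append(time.monotonic() - start)

    samples.sort()
    p99 = samples[int(len(samples) * 0.99)]
    print(f"99th percentile: {p99 * 1000:.2f} ms")

If a fork happens somewhere inside the run, the delayed requests show up as a cluster of slow samples and drag the high percentiles up.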

Changing instance type, configuration, or setup in order to improve this behavior is a good idea, and there are use cases where even a single request with too high a latency is unacceptable. However, it is apparently not obvious that latency spikes of 1 second every 30 minutes (or more often, if you use AOF with the right rewrite triggers) are very different from latency spikes that are evenly distributed across the set of requests.

With evenly distributed spikes, if generating a page requires a number of requests to a Redis server in order to create the output, it is very likely that a page view will incur the latency penalty: this can degrade the quality of service significantly; see http://latencytipoftheday.blogspot.it/2014/06/latencytipoftheday-most-page-loads.html.
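The arithmetic behind that link is easy to check. A small sketch (the request counts are illustrative, not from the post): if per-request latencies are independent, a page view of n requests avoids the bad tail only if every single request does.

    # Probability that a page built from n independent requests
    # contains at least one request slower than the 99th percentile.
    def hits_tail(n: int, percentile: float = 0.99) -> float:
        return 1 - percentile ** n

    for n in (1, 10, 100):
        print(n, f"{hits_tail(n):.1%}")  # 1.0%, 9.6%, 63.4%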

However, 1 second of latency every 30 minutes is a completely different thing. For one, the percentile with good latency gets better *as the number of requests increases*, since the more requests there are, the less likely that second of latency is to be over-represented in the samples (if you have just 1 request per minute, and one of those requests happens to hit the high latency, it will affect the 99.99th percentile much more than it would with 100 requests per second).
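A sketch of that effect, under the assumption of one 1-second stall per 30-minute window and evenly spaced requests:

    def delayed_fraction(requests_per_window: int,
                         spike_s: float = 1.0,
                         window_s: float = 1800.0) -> float:
        """Fraction of samples delayed by a single 1 s stall per window."""
        delayed = max(1, round(requests_per_window * spike_s / window_s))
        return delayed / requests_per_window

    # 1 request per minute: 30 samples per window, 1 of them slow.
    print(f"{delayed_fraction(30):.2%}")       # ~3.33%: even p97 is contaminated
    # 100 requests per second: 180,000 samples, ~100 of them slow.
    print(f"{delayed_fraction(180_000):.3%}")  # ~0.056%: p99 stays clean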

Second: most page views will be unaffected. The only users that will see the 1-second delay are the ones making a request that crosses the fork call. Every other request has an extremely low probability of experiencing a latency significantly worse than the average. Also note that a page view crossing the fork time, even when composed of 100 requests, can't be delayed by more than a second in total, since the requests complete as soon as the fork() call terminates.

The bottom line here is that, if there are hard latency requirements for every single request, a setup where a request can be delayed by 1 second from time to time is clearly a big problem. However, when the goal is to provide a good quality of service, the distribution of the latency spikes has a huge effect on the outcome. Redis latency spikes due to fork on Xen are isolated points on the timeline, so the percentage of page views they affect, even when page views are composed of a large number of Redis requests, is proportional to the fraction of time spent inside a spike: 1 second every 1800 seconds in this case, so only about 0.05% of page views will be affected.
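Putting numbers on that, a back-of-the-envelope sketch; the 50 ms page-generation time is an assumed figure, not from the post:

    spike_s = 1.0      # duration of the fork stall
    period_s = 1800.0  # one stall every 30 minutes
    page_s = 0.05      # assumed time a page view spends talking to Redis

    # A page view is delayed if any part of it overlaps the stall window.
    affected = (spike_s + page_s) / period_s
    print(f"{affected:.3%} of page views affected")  # ~0.058%, roughly 0.05%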

Latency characteristics are hard to capture with a single metric: the full percentile curve and the distribution of the spikes provide a better picture. In general, rules of thumb are a good way to start an investigation, and it is true that the average latency is a poor metric. However, promoting a rule of thumb into an absolute truth also has its disadvantages, since many complex things remain complex, and in need of close inspection, regardless of our willingness to over-simplify them.

At the same time, fork delays on EC2 instances are one of the worst experiences for Redis users in one of the most popular runtime environments today, so I'm starting to test Redis on EC2 regularly: we'll soon have EC2-specific optimization pages in the official Redis documentation, and a safer way to operate master-slave replicas with persistence disabled.

If you need EC2 + a Redis master with persistence disabled right now, the simplest "quick fix" to deploy is to disable automatic restarts of the Redis instances and use Sentinel for failover, so that crashed masters will not automatically become available again, and will instead be failed over by Sentinel. The system administrator can restart the old master manually after checking that the failover was successful and there is a new active master.
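A minimal sketch of the configuration side of that setup; the address, quorum, and timeouts below are illustrative placeholders:

    # redis.conf on the master: persistence disabled
    save ""
    appendonly no

    # sentinel.conf on each Sentinel node
    sentinel monitor mymaster 10.0.0.1 6379 2
    sentinel down-after-milliseconds mymaster 5000
    sentinel failover-timeout mymaster 60000
    sentinel parallel-syncs mymaster 1

On top of that, whatever supervises the master (init script, systemd unit, and so on) must be configured not to respawn redis-server on exit, otherwise a crashed master can come back empty before Sentinel completes the failover.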

EDIT: Make sure to check the Hacker News thread, which contains interesting information about EC2, Xen, and fork time: https://news.ycombinator.com/item?id=8532851. Also, not all EC2 instances are the same, and certain types provide fork times on par with bare-metal systems: https://redislabs.com/blog/testing-fork-time-on-awsxen-infrastructure#.VFJQ-JPF8yF
