Linux BFQ Scheduler with SSD

2014-07-08
  • secretformula

    In the past I had always read that you can get a performance boost on SSDs by switching from the default Linux scheduler, CFQ, to one such as deadline or noop.

    Today, after a fresh install of Manjaro Linux, I checked which scheduler was in use by running cat /sys/block/sdb/queue/scheduler and was greeted by the following:

    noop deadline cfq [bfq]
    

    I did a bit of searching and found out that BFQ is a new I/O scheduler in Linux.

    What are its performance characteristics when used with SSDs, and in particular, is it still advisable to switch to the noop or deadline scheduler?
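    For reference, the active scheduler can be inspected and changed at runtime through sysfs. A minimal sketch (sdb is the device from the question above; yours will likely differ, and the change below does not survive a reboot):

    ```shell
    dev="${1:-sdb}"   # device from the question; pass your own as $1
    sysfs="/sys/block/$dev/queue/scheduler"

    if [ -e "$sysfs" ]; then
        # The scheduler shown in brackets is the active one,
        # e.g. "noop deadline cfq [bfq]"
        cat "$sysfs"
        # Switch to deadline for this boot only; to persist it, use a
        # udev rule or the elevator= kernel command-line parameter
        echo deadline | sudo tee "$sysfs"
    else
        echo "no such block device: $dev"
    fi
    ```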

  • Answers

    Related Question

    Should I store my code/projects on my SSD or my secondary drive?
  • fr0man

    I just got a new box. It has an SSD for the primary drive and a 1TB SATA drive for the secondary. I'm going to run Windows and my binaries on the SSD and keep all my downloads/documents/music/etc. on the secondary drive.
    My question is should I also keep my Visual Studio Projects and code on the SSD or keep them on the secondary drive? The faster SSD would presumably be better for compiling and indexed searches, but would it be better to keep it on the 2nd drive for a more parallel disk IO situation?


  • Related Answers
  • Christian

    SSDs have much better I/O performance, so it makes sense to keep your code on the SSD.

  • Dennis Williamson

    It depends on the drive you have. The HD will always lose to the SSD on read performance, but maybe not on write performance. Write performance is going to be important during compilation, when the build creates new executables, assemblies, and other build artifacts.

    Copy one of your larger solutions to both the HD and the SSD and compile each. You'll notice the difference one way or the other and have your decision. My guess is the HD will be faster for compiles/builds, but the SSD will win at everything else.
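    Before setting up a full compile comparison, a quick raw sequential-write check can hint at which drive will win the write-heavy part of a build. A rough sketch assuming GNU dd; the /mnt/ssd and /mnt/hdd mount points are hypothetical, and note this measures sequential writes only, not the many small writes a real build produces:

    ```shell
    # Write 64 MiB to the target directory and report throughput.
    # conv=fdatasync forces the data to disk so the page cache
    # doesn't flatter the result.
    measure_write() {
        target="$1"
        dd if=/dev/zero of="$target/ddtest.bin" bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1
        rm -f "$target/ddtest.bin"
    }

    measure_write /mnt/ssd   # hypothetical SSD mount point
    measure_write /mnt/hdd   # hypothetical HD mount point
    ```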

  • KeithB

    I don't know if this is possible in Visual Studio, but the best option may be a combination of the two: put the source code on the SSD, but have the compiled objects written to the HD. This is how we have our make-based projects laid out, though for other reasons.
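    For what it's worth, MSBuild projects can express this split via the OutputPath and IntermediateOutputPath properties. The idea in shell terms, as a self-contained sketch (temp directories stand in for the two drives, and the hello.c program is purely illustrative):

    ```shell
    # Stand-ins for the two drives; in practice these would be an
    # SSD source tree and an HD build-output tree.
    src=$(mktemp -d)   # "SSD": sources live here
    out=$(mktemp -d)   # "HD": compiled output lands here

    cat > "$src/hello.c" <<'EOF'
    #include <stdio.h>
    int main(void) { puts("built from SSD, written to HD"); return 0; }
    EOF

    # The compile reads from one drive and writes to the other
    cc -o "$out/hello" "$src/hello.c"
    "$out/hello"

    rm -rf "$src" "$out"
    ```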

  • Brad Patton

    Hard drive speed is important to overall Visual Studio performance. Scott Guthrie touches on it well in this post:

    Multi-core CPUs on machines have gotten fast enough over the past few years that in most common application scenarios you don't usually end up blocking on available processor capacity in your machine.

    When you are doing development with Visual Studio you end up reading/writing a lot of files, and spend a large amount of time doing disk I/O activity. Large projects and solutions might have hundreds (or thousands) of source files (including images, css, pages, user controls, etc). When you open a project Visual Studio needs to read and parse all source files in it so as to provide intellisense.

    When you are enlisted in source control and check out a file you are updating files and timestamps on disk. When you do a compilation of a solution, Visual Studio will check for updated assemblies from multiple disk path locations, write out multiple new assemblies to disk when the compilation is done, as well as persist .pdb debugger symbol files on disk with them (all as separate file save operations).

    When you attach a debugger to a process (the default behavior when you press F5 to run an application), Visual Studio then needs to search and load the debugger symbols of all assemblies and DLLs for the application so as to setup breakpoints.