Recent Updates

  • lenz 12:57 on 2015-08-26 Permalink
    Tags: froscon

    Slides and Video of my talk “The Evolution of Storage on Linux” at FrOSCon 2015 

    Last weekend, I gave a presentation titled “The Evolution of Storage on Linux” at this year’s FrOSCon 10 conference (Happy Anniversary!). In case you missed it, you can find the slides and video recording below. Thanks to the FrOSCon team for having me – it’s always a pleasure to be there!

    Unfortunately, I had some technical issues at the beginning and was somewhat too ambitious with regard to the topics I wanted to cover, so I ran out of time. There is simply too much cool stuff happening in the storage space – but I hope the audience still enjoyed it!

    Slide deck:

    Video:

  • lenz 13:35 on 2015-07-16 Permalink
    Tags: licensing

    Creating a Contributor Agreement for Your Project 

    Back in the MySQL days, there was a need to have a contributor agreement that made it clear under which terms code contributions to the MySQL code base could be accepted. This was a requirement due to the dual-licensing model of MySQL, under which the software was available both under the GPL and a proprietary license.

    This agreement was further refined when MySQL was acquired by Sun Microsystems in 2008, resulting in the “Sun Contributor Agreement” (SCA), which was used for all Open Source projects sponsored or governed by Sun Microsystems (e.g. OpenOffice.org, Java, etc.).

    The text of the agreement itself was licensed under a Creative Commons license (Creative Commons Attribution-Share Alike 3.0 Unported), and it was later used as the basis for the contributor agreements of several other Open Source projects, e.g. MariaDB or OwnCloud (even though both fail to give proper attribution to the original). In fact, the agreement still exists today as the Oracle Contributor Agreement, after Sun Microsystems was acquired by Oracle in 2010. If you would like to submit a patch to MySQL, you first need to get your name on the OCA Signatories List.

    While doing some research on creating such a contributor agreement for openATTIC, I was pointed to this very useful resource: http://contributoragreements.org/

    “Contributor agreements are agreements between an open source or open content project and contributors to the project that set out what the project can do with the respective contribution: code, translation, documentation, artwork, etc. The purpose of such agreements is to make the terms under which contributions are made explicit and thereby protect the project, the users of the project’s code or content, and often the contributors themselves. Contributor agreements provide confidence that the guardian of a project’s output has the necessary rights over all contributions to allow for distribution of the product under either any license or any license that is compliant with specific principles.”

    The nice part about this web site: it provides a guided Contributor License Agreement Chooser that allows you to compile a custom agreement based on requirements (e.g. copyright assignment, patent clauses) that you define – similar to the Creative Commons License Chooser, which guides you to the appropriate license based on the terms and conditions you select.

    So in case your project needs a contributor agreement, please don’t re-invent the wheel and consider making use of this site instead! There are way too many custom agreements floating around already…

  • lenz 14:58 on 2015-07-10 Permalink  

    Interview on the it-novum Business Open Source Blog 

    My new employer performed a short interview with me about myself, my role in the openATTIC team and my thoughts on Open Source. Hope you enjoy it!

  • lenz 17:35 on 2015-07-01 Permalink

    Moving on 

    Today, I’ve started a new chapter in my career: I’ve left TeamDrive Systems after 1.5 years and joined it-novum in Fulda, where I will be responsible for their open source storage solution openATTIC as a Senior Product Manager in their infrastructure group.

    (More …)

  • lenz 13:26 on 2015-06-19 Permalink

    Some useful issue queries for Atlassian JIRA 

    I wanted to share a few queries I created for Atlassian JIRA that help me keep track of my activities (e.g. for a weekly report, or for following up on issues I submitted to other developers) – maybe they are useful for you, too:

    • “My issues reported last week”:
      reporter = currentUser() AND createdDate >= startOfWeek(-1w) AND createdDate < startOfWeek() ORDER BY created DESC
    • “My issues resolved last week”:
      assignee = currentUser() AND Status = Resolved AND updatedDate >= startOfWeek(-1w)
    • “Open issues reported by me”:
      reporter = currentUser() AND assignee != currentUser() AND resolution = Unresolved ORDER BY createdDate DESC
     
  • lenz 13:56 on 2015-06-04 Permalink
    Tags: photography

    Pretty nifty, a site dedicated to tutorials, workflows and a showcase for high-quality photography using Free/Open Source Software: https://pixls.us/

  • lenz 12:22 on 2015-06-01 Permalink

    Shared storage in the cloud 

    While virtualization makes it pretty easy to spin up new VMs quickly (e.g. for load balancing purposes), I have always felt that providing concurrent file-based access to the same data from these VMs is somewhat cumbersome, even though it is still a requirement for many applications that need to share data between parts of the application, or between multiple instances thereof.

    If you didn’t have some kind of SAN/NAS solution in your data center, it usually involved creative workarounds on the VM side (e.g. setting up a VM instance that acted as a central file service via NFS/SMB, or using a shared-disk file system like GFS2 or OCFS2). But even if you did, the underlying virtualization technology did not provide any integration or API-based approach to this (at least that was my impression).

    I recently stumbled over Amazon’s Elastic File System (EFS), which was announced on April 9th, 2015. EFS provides shared storage as a service (STaaS) via the NFSv4 protocol. This makes it pretty easy to mount the same share on multiple (Linux-based) VMs. Amazon only charges you for the storage that you actually use (billed monthly, based on the average used during the month), and the use of SSDs should make sure that latency (IOPS) does not suck too badly.
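    For illustration, attaching an EFS file system to a Linux VM boils down to a regular NFSv4 mount – a sketch with a made-up file system ID and region (the actual endpoint comes from your EFS console):

    ```shell
    # Mount an EFS file system via NFSv4
    # (fs-12345678 and us-east-1 are placeholder values)
    sudo mkdir -p /mnt/efs
    sudo mount -t nfs4 -o nfsvers=4.1 \
        fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs
    ```

    The same mount can be performed on any number of VMs in parallel, which is the whole point of the service.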

    Interestingly, Microsoft has been offering something similar for almost a year now: the Azure File Service was already announced on May 12th, 2014. It provides shared access to files via the SMB protocol (which makes it suitable for both Windows and Linux-based VMs). In addition, the Azure File Service provides a REST API to access and manage the objects stored on the service, which makes it even more versatile. Similar to Amazon, Microsoft only charges for the disk space you actually use.
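    On Linux, such a share can be attached with a standard CIFS mount – again a sketch with placeholder storage account, share and key names (the supported SMB dialect may depend on the state of the service):

    ```shell
    # Mount an Azure File Service share via SMB/CIFS
    # (mystorageacct, myshare and the key are placeholder values)
    sudo mkdir -p /mnt/azurefiles
    sudo mount -t cifs //mystorageacct.file.core.windows.net/myshare \
        /mnt/azurefiles \
        -o vers=2.1,username=mystorageacct,password=STORAGE_ACCOUNT_KEY
    ```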

    Note that both EFS and Azure File Service are still labeled as “Preview” at the time of writing and have certain limitations you should be aware of (Unsupported NFSv4.0 Features in EFS, Amazon EFS Limits During Preview, Features Not Supported By the Azure File Service) – so make sure to have backups of any data you store on them 🙂

    The Open Source community has noticed the requirement for shared file access, too – Red Hat recently announced their participation in OpenStack’s Manila project, which provides a shared file service for this emerging cloud technology. From what I can tell, Manila’s focus is currently more on providing shared storage for OpenStack compute nodes; it’s not entirely clear to me yet whether there are any plans to establish this as a solution for providing shared file systems to virtual machines as well (in addition to the object and block storage capabilities they already offer).

  • lenz 13:47 on 2015-05-07 Permalink
    Tags: web productivity   

    Learn something new every day: to quickly create a screenshot of a web page in Mozilla Firefox, just press Shift + F2, which opens a small command line interface (including tab completion!). Now type “screenshot <filename>” and you’re done!

  • lenz 10:56 on 2015-02-03 Permalink
    Tags: scalability

    Back in the good old days of physical servers, you basically had two choices to increase the performance of your application: you either “scaled up”, by migrating to a beefier server with more RAM and faster or more CPUs, or you “scaled out”, by distributing your application load across multiple individual servers.

    Interestingly, I still observe customers applying this way of thinking to virtual environments, using multiple VMs behind a virtual load balancer for scaling out application load.

    Does this approach really make sense anymore? I think it puts more load on a hypervisor to schedule multiple VMs for handling the workload than if the same load were handled by one single, more powerful VM instance (with more vCPUs and more vRAM).

    Does “Scale Out” still make any sense in a virtual environment? It probably also depends on the application and whether it can effectively scale with more CPUs and memory, but in general I don’t think it is a valid approach.

    • Ingo 11:58 on 2015-02-03 Permalink | Reply

      I think it can make sense:

      • think of scaling out beyond the limits of a single physical server
      • combining this with a placement policy across more than one availability zone would even give you HA
      • performance-wise, it could be beneficial on a NUMA host to use VMs that are bound to one NUMA zone – that could be more performant than crossing all NUMA zones with one VM

      just my 0.02 €,
      greets, Ingo

      • lenz 12:07 on 2015-02-03 Permalink | Reply

        Hi Ingo, thanks for your comment!

        Good points, I agree that from an HA perspective there is a valid reason for this kind of setup, but you need more than one physical host/hypervisor for it. Also on the HA side of things, scale-out lets you perform maintenance tasks on one node without having to take down the service.

        With regards to NUMA zones, I have no experience with the performance impact of this. My gut feeling is that it might actually be more performant to schedule two VMs in different NUMA zones than to have one big VM that crosses NUMA boundaries. I need to do some research on this, to educate myself 🙂

        Thanks,

        Lenz

        • Christian 18:09 on 2015-02-03 Permalink | Reply

          Even if there isn’t any performance gain on NUMA (while I’m pretty confident that there IS, without having any backing numbers on it either 😉), from a hypervisor perspective it’s still more efficient to handle e.g. two 1-vCPU guests vs. one 2-vCPU guest – the VM can only demand “I need my CPU resource”, and the hypervisor would then have to find two available physical cores to give the 2-vCPU VM its resource, even if only a single-threaded task within the VM asked for it.
          So keep your VMs as small as possible and rather spread single tasks among multiple of those small VMs.

  • lenz 17:45 on 2015-01-31 Permalink

    Summarizing my last year’s professional achievements 

    It’s been a year and a month since I left the Oracle Linux team to join TeamDrive Systems, and I wanted to quickly recap and summarize some of the highlights and notable achievements in my new role.
    (More …)
